Wednesday 30 December 2015

Providing assurance of machine decision making

"All Models Are Wrong But Some Are Useful" - George Box

The aim of Human-Machine Teams (HMT) is to make rapid decisions under changing situations characterised by uncertainty. The aim of much modern automation is to enable machines to make such decisions for use by people or other machines. The process of converting incomplete, uncertain, conflicting, context-sensitive data to an outcome or decision needs to be effective, efficient, and to provide some freedom from risk. It also may need to reflect human values, legislation, social justice etc. How can the designer or operator of such an automated system provide assurance of the quality of decision making (potentially to customers, users, regulators, society at large)? 'Transparency' is part of the answer, but the practical meaning of transparency has still to be worked out.

The philosopher Jurgen Habermas has proposed that action can be considered from a number of viewpoints. To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental action and strategic action. Strategic action is part-technical, part-social and refers to the decision-making procedure; it sits at the decision-theory level, e.g. the choice between maximin and maximax criteria, and needs supplementing by values and maxims. It may be that Value Sensitive Design forms a useful supplement to Human-Centred Design to address values.
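To make the decision-theory level a little more concrete, here is a minimal sketch in plain Python contrasting the maximin and maximax criteria over a payoff table. The options and payoff values are invented purely for illustration.

    # Hypothetical payoffs for three courses of action under three possible
    # world states (all values invented for illustration).
    payoffs = {
        "option_a": [4, 6, 2],
        "option_b": [5, 5, 5],
        "option_c": [1, 9, 3],
    }

    def maximin(table):
        # Pessimistic criterion: choose the option whose worst case is best.
        return max(table, key=lambda option: min(table[option]))

    def maximax(table):
        # Optimistic criterion: choose the option whose best case is best.
        return max(table, key=lambda option: max(table[option]))

    print(maximin(payoffs))  # option_b - the best worst case
    print(maximax(payoffs))  # option_c - the best best case

Neither criterion is 'right': choosing between them is exactly the kind of value-laden, strategic choice that needs supplementing by values and maxims.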

The Myth of Rationality

"Like the magician who consults a chicken’s entrails, many organizational decision makers insist that the facts and figures be examined before a policy decision is made, even though the statistics provide unreliable guides as to what is likely to happen in the future. And, as with the magician, they or their magic are not discredited when events prove them wrong. (…) It is for this reason that anthropologists often refer to rationality as the myth of modern society, for, like primitive myth, it provides us with a comprehensive frame of reference, or structure of belief, through which we can negotiate day-to-day experience and make it intelligible."
Gareth Morgan

The Myth of Rationality is discussed, for example, here. The limits of rationality (or perhaps its irrelevance) in military situations should be obvious. If you need a refresher, then try Star Trek 'The Galileo Seven'. The myth of the rational manager is discussed here. This is not to say that vigilant decision making is a bad thing - quite the opposite. As Lee Frank points out, rationality is not the same as being able to rationalise.

The need for explanation / transparency

The need for transparency and/or observability is discussed in a previous post here. There is an interaction between meeting this need and the approach to decision making. AFAIK the types of Machine Learning (ML) currently popular with the majors cannot produce a rationalisation/explanation for decisions/outcomes, which would seem a serious shortcoming for applications such as healthcare. If I am a customer, how can I gain assurance that a system will give the various users the explanations they need?
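One way of making that customer question concrete is to state the contract up front: no outcome without a rationale addressed to a named audience. The sketch below is purely illustrative; the Decision and ExplainableDecider names and the audience labels are my own invention, not any standard API.

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class Decision:
        outcome: str
        confidence: float   # 0..1, however it is derived
        rationale: str      # an explanation in the audience's own terms
        audience: str       # e.g. "operator", "clinician", "regulator"

    class ExplainableDecider(Protocol):
        def decide(self, situation: dict, audience: str) -> Decision:
            """Return an outcome together with a rationale fit for the audience."""
            ...

A system that can only populate the outcome field fails the contract, which is, in effect, the assurance gap described above.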

Approach to decision making

"It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits; it is evidently equally foolish to accept probable reasoning from a mathematician and to demand from a rhetorician scientific proofs." - Aristotle
At some point, someone has to decide how the machine is to convert data to outcomes (what might have been called an Inference Engine at one point). There is a wide range of choices: the numeric/symbolic split, algorithms, heuristics, statistics, ML, neural nets, rule induction. In some cases, the form of decision making is inherent in the tool used, e.g. a constraint-based planning tool, forward-chaining production system, truth maintenance system etc. There are choices to be made in search (depth vs. breadth) and in the types of logic or reasoning to be used. There were attempts before the AI winter to match problem type to implementation, but IMHO they didn't finish the job, and worked-up methodologies such as CommonKADS would be a hard sell now. So, what guidance is available to system designers, and what forms of assurance can be offered to customers at design time? Genuine question.
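For readers who have not met a forward-chaining production system, here is a toy sketch in plain Python of the mechanism alluded to above: rules fire on known facts until nothing new can be derived. The facts and rules are invented for illustration; this describes the mechanism rather than recommending it.

    # Toy forward-chaining production system: each rule maps a set of
    # required facts to a new fact. Rules fire until a fixed point is reached.
    rules = [
        ({"contact_detected", "contact_unidentified"}, "classification_needed"),
        ({"classification_needed", "sensor_degraded"}, "request_human_review"),
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"contact_detected", "contact_unidentified",
                         "sensor_degraded"}, rules))

The attraction from an assurance point of view is that the rule set is there to be inspected; the designer can say exactly why a conclusion was reached, which is harder to claim for the statistical ML discussed above.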

Sunday 20 December 2015

Human Machine Teaming - Data Quality Management

"A mathematician is a man who is willing to assume anything except responsibility." (Theodore von Karman)

"Rapid, effective decision making under conditions of uncertainty whilst retaining Meaningful Human Control (MHC)" is the sort of mantra associated with Human Machine Teaming (HMT). A purely mathematical approach to risk and uncertainty is unlikely to match the needs of real world operation, as Wall St. has discovered.

So, during the design of a system where the data are potentially incomplete, uncertain, contradictory etc., how does the designer offer assurance that data quality is being addressed in an appropriate manner? Or are we doomed to systems crafted on the basis of "trust me"?

Not all forms of uncertainty should be treated in the same way; this applies to data fusion, say, and most other tasks. It is my impression that the literature on data quality and information quality is not being used widely in the AI, ML, HMT community just now - I'd be delighted to be corrected on that.

ISO/IEC 25012 "Software Engineering – Software Product Quality Requirements and Evaluation (SQuaRE) – Data Quality Model", 2008, categorises quality attributes into fifteen characteristics from two different perspectives: inherent and system-dependent. This framework may or may not be appropriate to all applications of HMT, but it makes the point that there is more than just "uncertainty". Richard Y Wang has proposed that "incorporating quality information explicitly in the development of information systems can be surprisingly useful" in the context of military image recognition.
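To show 'more than just uncertainty' in miniature, here is a sketch of a data item carrying a few ISO/IEC 25012-flavoured quality attributes alongside its value. The attribute names echo the standard's vocabulary, but the structure, thresholds and numbers are my own invention for illustration.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class QualifiedValue:
        value: float
        accuracy: float        # estimated error bound, in the value's units
        completeness: float    # fraction of expected source data present, 0..1
        observed_at: datetime  # supports a currentness check
        source: str            # provenance / credibility hook

        def is_current(self, max_age: timedelta) -> bool:
            return datetime.now() - self.observed_at <= max_age

    reading = QualifiedValue(value=12.7, accuracy=0.5, completeness=0.8,
                             observed_at=datetime.now(), source="sensor_3")
    if reading.completeness < 0.9 or not reading.is_current(timedelta(seconds=30)):
        print("flag for review rather than fuse automatically")

The point is simply that quality travels with the data, so downstream fusion and decision steps can treat a stale, patchy reading differently from a fresh, complete one.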

HMT takes place in the context of Organisational Information Processing. The good news is that this is quite well-developed for flows within an organisation (less so for dealing with an opposing organisation). The bad news is that Weick is hard work. The key term is equivocality, and I suggest that the HMT community use it as an umbrella term, embracing 'uncertainty' and other such parameters. Media richness theory helps.

"A man's gotta know his limitations" (Clint Eastwood). "So does a robot" (BSJ)

A key driver for data quality management is whether a system (or agent etc.) assumes an open world or a closed one. Closed-world processing has to know the fine details, e.g. how a Google self-driving car interacts with a cyclist on a fixed-wheel bicycle. By contrast, GeckoSystems takes an open-world approach to 'sense and avoid' and doesn't have to know these fine details. It would seem that closed-world processing needs explicit treatment of data quality to avoid brittleness.
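The difference is easy to show in miniature: under a closed-world assumption anything not known to be true is treated as false, whereas an open-world system keeps 'unknown' as a first-class answer it can act cautiously on. The facts below are invented purely for illustration.

    known_true = {"object_ahead"}
    known_false = {"object_is_pedestrian"}

    def closed_world(fact):
        # Anything not known to be true is assumed false -
        # brittle when the data are incomplete.
        return fact in known_true

    def open_world(fact):
        # Distinguish 'known false' from 'simply unknown'.
        if fact in known_true:
            return True
        if fact in known_false:
            return False
        return None  # unknown: trigger cautious behaviour or more sensing

    print(closed_world("object_is_cyclist"))  # False - quietly assumed away
    print(open_world("object_is_cyclist"))    # None - flagged as unknown

It is that silent False in the closed-world case that makes explicit treatment of data quality necessary if brittleness is to be avoided.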

Time flies like an arrow, fruit flies like a banana.

At some point, the parameters acquire meaning, or semantic values. "We won’t be surfing with search engines any more. We’ll be trawling with engines of meaning." (Bruce Sterling). The parameters may be classified on the basis of a folksonomy, or the results of knowledge elicitation. So far as I can see, the Semantic Revolution has a way to run before achieving dependable performance. Roger Schank has been fairly blunt about the present state of the art. Semantic parameters are likely to have contextual sensitivity, which may be hard to characterise.

If a system is to support human decision making, then it may need to provide information well beyond that required analytically for the derivation of a mathematical solution. Accordingly, the system may need to manage data about the quality of processing. For robotic state estimation, the user may need more than a point best estimate. Confidence estimates may need to be expressed in operational terms, rather than mathematical ones. Indeed, the HMT may need to reason about uncertainty as much as under uncertainty.
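As a small illustration of 'operational rather than mathematical terms', the sketch below (invented numbers, a conventional two-sigma rule) turns a variance into a statement an operator can act on, instead of handing over a bare mean and covariance.

    import math

    def operational_range(variance_m2, k=2.0):
        # Turn a variance into an interval an operator can act on.
        # k=2 gives roughly 95% coverage if errors are approximately normal.
        half_width = k * math.sqrt(variance_m2)
        return f"within about {half_width:.0f} m of the reported position"

    print(operational_range(variance_m2=225.0))  # within about 30 m ...

The arithmetic is trivial; the point is the translation, so that the team is reasoning about the uncertainty in the operator's units, not merely under it.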

This post is scrappy and home-brewed. Suggestions for improvement are welcome. If I am anywhere near right, then the state of art needs advancing quite swiftly. As a customer I wouldn't know how to gain assurance that the management of data quality would support safe and effective operation, and as a project manager, I wouldn't know how to offer such assurance.

Update: This is nice on unknown unknowns and approaches to uncertainty in data:

Friday 11 December 2015

Human-Machine Teaming - meeting the Centaur challenge

At the centre of the US DoD Third Offset is Human-Machine Teaming (HMT), with five building blocks:
  1. Machine Learning
  2. Autonomy / AI
  3. Human-Machine Collaboration
  4. Assisted human operations
  5. Autonomous weapons.
The analogy with Centaur Chess is a powerful one, and potentially makes the best use of both people (H) and machines (M). However, this approach is not easy to implement. This post is a quick look at some issues of design and development for HMT. Other aspects of HMT will be addressed in subsequent posts (hopefully).

1. Human-Centred Automation

The problems of SAGE, one of the first automated systems, were well-documented in 1963. Most automated systems built now still have the same problems. "H" managed to get the UK MoD to use the phrase "So-called unmanned systems" to reflect their reality. There are people working on autonomous systems who really believe there will be no human involvement or oversight. These people will, of course, build systems that don't work. In summary, the state of the art is not good - an engineering-led technical focus leads to "human error".
The principles of human-centred automation were set out by Billings in 1991:
  • To command effectively, the human operator must be involved.
  • To be involved, the human operator must be informed.
  • The human operator must be able to monitor automated systems.
  • Automation systems must be predictable.
  • The automated system must also be able to monitor the human operator.
  • Each of the elements of the system must have knowledge of the other’s intent. 
We know a great deal about the human aspects of automation. The problem is getting this knowledge applied.
There is a considerable literature on technical aspects of HMT, including work on the Pilot's Associate / Electronic Crewmember. The challenge is with getting this expertise used.

2. Human-System Integration process

Human-System Integration (HSI) is more talked-about than done. For HMT, HSI has to be pretty central to design, development, and operation. This will require enlightened engineers, programmers, risk managers etc. There are standards etc. for HSI (e.g. the Incremental Commitment Model), though these do not address HMT-specific matters.

The state of Cognitive Systems Engineering (CSE) is lamentable. I can take some share of the blame here, having dropped my topics of interest in the AI winter (the day job got in the way). Nearly all of it is academic as opposed to practical. Some of the more visible approaches have very little cognition, minimal systems thinking and no connection with engineering. Gary Klein's work is probably the best place to find practical resources (starting with Decision Centred Design).
MANPRINT: the integration of people and machines may go very deep and require closer coupling of Human Factors Engineering and Human Resources (selection, training, career structures etc.) than has been the case to date. Not easy at scale.
Simulation-based design is probably the way to achieve iteration through to wargaming to support operation. Obviously there are issues of fidelity (realism) here, but they should be manageable.

3. Capability, ownership, responsibilities

The industrial capability to deliver HMT is limited, and the small pool of expertise is divided by the AI winter. Caveat emptor will be vital, and specialist capability evaluation tools for HMT don't exist (though HSI capability evaluation tools could be expanded to do the job). 'Saw one, did one, taught one' won't work here unless you want to fail.
The data (big or otherwise), algorithms, heuristics, rules, concepts, folksonomies etc. are core to military operations (and may be sensitive). It would be best if they were owned and managed by a responsible military organisation, rather than a contractor. In a sense, they could be considered an expansion of doctrine.

4. Test, acceptance

If HMT is to work as a team, then it may well be that the M becomes personally tailored to the individual H. This goes against military procurement in general, and raises questions about how to conduct T&E and acceptance. If the M evolves in harmony with the H, then this raises further difficulties. Not insuperable, but certainly challenging. Probably simpler in the context of the extended military responsibility proposed above.

5. State of the art

We are seeing the return of hype in AI. Sadly, it seems little was learned from the problems of the previous phase, exacerbated by somewhat impractical hype on ethics.
It is still as much craft as engineering to build responsible systems; there is a real shortage of good design guidance. HMT has been the province of the lab, and has not been translated into anything resembling mainstream system acquisition. Much to do.

Thursday 3 December 2015

Smart shipping and the human element

Martin Stopford (MS) has written about 'smart' shipping here and here. There is a related article here and videos here and here. He makes a number of important points about seafaring. This article picks up some of these points and responds a) because Martin Stopford's proposal is likely to be influential and b) because it has the potential to be positive for seafarers. This is an opportunity to be grasped with both hands by those concerned with seafarers or the human element.
MS summary: 
Smart shipping would bring about a much greater integration of ship operations with the internet and big data. The Smart Shipping model focuses on the transport performance of the company/fleet as a whole, rather than a collection of individual ships, resulting in wide reaching improvements in transport productivity; safety; personnel development; and logistics. The need is to spend an appropriate amount of money on how assets are going to be used.



Four problems with the existing business are identified: first, the technology used is old and economies of scale have been taken to an extreme; second, there is a real problem in attracting crew; third, the market has changed so that two-thirds of the cargo is controlled by non-OECD countries; and fourth, the industry has very weak customer relationships.


1. From Gambling to Management

MS summary:
Shipping needs to move from gambling to management. "The problem with the bulk shipping business over the last 20-30 years is it's been a gambling business, not a management business. It's a management solution: you semi-automate ship operations, you semi-automate navigation and you implement door-to-door logistics." The focus here is on optimizing the overall management of the business by treating the transport performance of ships as a single production unit, like a BMW car plant. The result is QA systems that really work, not a set of manuals nobody consults.
Good management is quite straightforward, and well summarised in Henry Stewart's Happy Manifesto. His Four Steps to Happiness are also good sense. Henry Mintzberg's brief 'Musings on Management' are as relevant as ever, particularly the change from top-down to inside-out.
The challenge for shipping is: Is shipping smart enough to embark on becoming a Wirearchy? “a dynamic two-way flow of power and authority, based on knowledge, trust, credibility and a focus on results, enabled by interconnected people and technology”.
The 'smart' in smart shipping may signal a move to a knowledge economy and intellectual capital.

2. Customer relations and innovation

MS Summary:
“We need to put these things together and squeeze some more value out the transport chain and put a smile on the customer’s face. The customer should not be the person you are beating to death over the negotiating table.”
"The way you treat your employees is the way they will treat your customers" Richard Branson

Eric von Hippel and his team at MIT have made the case for user-centred innovation, where ideas from users and customers are sought and used. 'Lead' users may not just be a source of innovation; they may be the innovators themselves.
As regards the necessity for such disruption, do remember, if you don't disrupt yourself, someone else will do it for you.

3. Jobs, Job To Be Done (JTBD), careers, incentives

MS summary:
The need is for a business model which allows employees in a shipping company to be effective as a single team, with better and more rewarding career opportunities for young people and greater integration between ship and shore. Tomorrow's shipping is most of all about people. Better use of people as a resource. Manage ship and shore personnel into a more productive team with better career opportunities. Break down the ship-shore barrier, create a team spirit, opportunities for a career. Build a whole new culture. Run the fleet as a team. Experienced engineers ashore, junior ones at sea getting responsibility early, with support. We’ve got 1.7 people onshore, 20 people on the ship; the 20 people on the ship hate the people onshore and the 1.7 onshore think the guys on the ship are a load of idiots. Is that the way to run a business? Integrate the systems and get it to run better.
Automate & de-skill ship operations & navigation: "It's not about the crew, it's about automation of navigation"
The proposal here is very exciting. It will require technology and jobs to be developed together, probably using Human Centred Design (HCD). Demonstrations and trial facilities will probably be needed if STCW is to advance at any sort of pace.
There will need to be a careful watch on incentives that block progress: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." - Upton Sinclair

4. Getting there - managing change

MS Summary:
We should not assume it will be easy.
The stakeholder gridlock in shipping will make change difficult. The full horrors of the alternative - of not changing - may not have been thought through yet.
The need for change is clear: "The Internet is nothing less than an extinction-level event for the traditional firm as we have known it for the past 100 years. The Internet makes it possible to create totally new forms of economic entities." - Esko Kilpi. Bandwidth limitations have sheltered shipping from the full force of the internet. That protection is ending.
As regards the process of organisational change, it is important to remember Virginia Satir's remark "No one likes to be should upon". For a few resources on change, see here.

5. The 'Human Error' Reduction Fallacy

Unsurprisingly, Martin Stopford has fallen for the 'human error' reduction fallacy being pushed by the 'toys for boys' autonomous ship/car/toaster crowd. However, he recognises that driverless ships are a very difficult topic which should only be considered when the industry has much more experience and depth.
"Human error is the symptom of system failure, not the cause" Dan Maurinho "In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid." James Reason
The 'human error' reduction fallacy is an unhealthy way to approach the design of automatic or autonomous systems. It is also likely to miss any big business opportunities that emerge. Human-Centred Automation based on Billings would be the basis of safe and effective operation.

6. Big data and safety

MS Summary:
Centralise analysis. Moving to shipping and maritime economics, this could turn out to be a tremendously exciting era for Maritime analysts.
It is early days, but it seems that, left to itself, big data tends to become big brother, using the panopticon for micro-management. Platform capitalism seems to like moral buffering upstream of the algorithms, with a moral crumple zone for the folk at the sharp end. This is not what shipping needs. It also looks like it isn't enough on its own. Here is an example from healthcare:
"In the real world, a big factor in patient health are social factors like mental health, social isolation, and transportation issues. Since this data is not typically collected, it is largely ignored by Big Data analytics. By collecting this data in a structured way, it can combine with the clinical data to create a truly complete care plan for the patient."
Centralised analysis may be right for economics, but for management, big data needs to be used to support smart people who can appreciate the context. "Appreciate the situation, don't situate the appreciation".

7. Automation to support a new business model

MS summary:
Smart phone style apps and standard interfaces. De-skilling. Don't just automate navigation, but onboard operations, systems.
Using systems that are familiar to people from the 21st Century sounds a good idea. Folk are going to use that which is easy to use, whether Type Approved or not. Commonality with what comes up the gangway is sensible.
De-skilling by automation has usually been counter-productive. Changing the organisation to give more responsibility to junior people by supporting them with a network of other people plus automation - that is do-able.
Automation and IT need to be seen as oxygen, not lubricating oil. Chris Boorman has a nice piece on the difference between human-centred automation and human replacement automation. "Automation enables enterprises to automate those core processes not to make cuts, but to free up resource to work on new disruptive projects. Faced with an increasingly complex world of technology - cloud, mobile, big data, internet of things - as well as growing consumer expectations, every business needs to turn to automation or perish....Every industry is going through a period of change as new technologies and new entrants look to disrupt the status-quo.  Automation is a key enabler for helping enterprises to disrupt their own industries and drive that change.  Acquiring new customers, retaining customers, driving business analytics, consolidating enterprises following mergers or driving agility and speed are all critical business imperatives.  Automation delivers the efficiency and enables the new way of thinking from your brightest talent to succeed."

8. Setting expectations

As Martin Stopford has recognised, moving to smart shipping is not going to be easy. A detailed passage plan is obviously inappropriate, but some sort of route with easy stages might be welcome.

9. Early actions

Some early actions are obvious:
  • Set the means of achieving scalable learning in place, including trying out some creation spaces.
  • Chris Boorman again: "Automation needs to be ingrained in an organization’s DNA early on and not deployed later as a replacement measure for existing job functions. It should instead be used to allow people and resources to be more focused on driving the business forwards, rather than on just keeping the lights on." The ingraining needs to start now.
  • Rob Miles has proposed levels of enlightenment (later slides in the presentation) as regards integrating safety into business. Smart shipping will need to have some enlightenment, and this will need to include the regulators.
Footnote: I understand the objections to the term 'human element' and sympathise. However, it is the IMO term. If we can use it to convey a Socio-Technical Systems (STS) approach, with a human element and a technical element, then it will do some good. See here for resources on STS.

The gloomy bits: From the CyClaDes EU project on crew-centred design, it has become apparent to me that shipping is far from ready to do crew-centred design - there is a long way to travel for all stakeholders. The 'human error' issue also goes to the heart of matters, from accident investigation through to daily operations. The smart people needed for smart shipping include all sorts of people, e.g. the engine experts who fit the wrong rings in the wrong grooves.

Thursday 1 October 2015

Automation anxiety


Thoughts before the Big Potatoes event on Automation Anxiety.
Firstly, and this is not negotiable, everyone interested in the topic needs to watch and read Bruce Sterling on Smart City States. We need to understand the money before we look at the technology. Sterling's book on the Internet of Things is good (also widely available as an eBook).
The best summary of 'the future of work' that I've seen is this by Janna Anderson. Quite long, but as brief as it could be, given the breadth of coverage.
The topic of people and technology has been debated for a long time without much resolution, which must say something. Here is Paul Goodman in 1969 - Can technology be humane?

My first thoughts are in the mind map above, and very gloomy they are.
My work has been concerned with encouraging the adoption of a human-centred approach to design and operation, mostly in a technical context. The default approach is human replacement automation. The problems with this have been well-documented at regular intervals, starting with Nehemiah Jordan in 1963, working on SAGE. It is very difficult to shift engineers, their customers, and their managers to a human-centred approach. Things are still at the guerrilla usability level of warfare, winning small battles slowly. So the Cambrian Explosion of automation coming our way will be annoying and hard to use. We will have micromanagement, BS jobs, etc. Whitlock's very sensible Human Values will continue to be ignored in a transactional economy.
I would contend that platform capitalism, masquerading as the sharing economy (see here too), is winning, and that platform cooperativism is not going to catch up, unless a mass of cavalry appears out of the sky. Capital will continue to outperform labour, and hence inequality will grow. Who owns the robots matters a lot, and it isn't looking good for the likes of me. Your local regulator is going to get crushed.
Secure jobs have gone - join the precariat. The sustainability of professions such as lawyer or surgeon is now under question because of the impact of automation. Maybe one day people will choose to be artisanal surgeons, but the disruption between now and then is going to be a rough ride.
I am aware that Human Resources departments have their limitations, but I fear that people analytics will be worse, and less ethical.
Finally, because of the gloomy nature of my own thoughts, I asked some Scandinavian friends for their views. One, working in Norway, is up to his eyes in automation / autonomy. His involvement means that the sponsors want a human-centred approach, and his work will deliver this. Not happening in the UK/US much, I fear. A Dane with a fairly global perspective sees his industry imbued with some techno-utopian thinking which it doesn't have the capability to deliver. A Swede, who was active in Swedish human-centred work, is now trying to export this to an Anglo-Saxon economy. She is unsure that the Nordic economies will be able to continue in their human-centred ways and resist the globalisation challenge.

Friday 25 September 2015

Ergonomics - the taxi driver test

How are we to communicate ergonomics to the population at large? - asks Sarah Sharples, as President of the Chartered Institute of Ergonomics and Human Factors.
My short answer is - I don't try to.
"What is, or are, ergonomics? What is, or are, Human Factors? If ergonomics and Human Factors are the same, then what is "ergonomics AND Human Factors?" These questions - and their answers - confuse people, and rightly so.
Human-Centred Design, on the other hand, enters people's vocabulary on one hearing. Generally, folk are pleased to hear it exists, and annoyed that it is not the norm in equal measure.

I practise communicating Human-Centred Design to the population at large by wearing the jacket in the picture. I forget about the writing on the back, so I am surprised when people in a queue ask me "What is Human-Centred Design?". I have got better at giving easily-understood answers. The guy in the chip shop was up for a long conversation on the merits of early Nokia phones (thank you Timo).

On my business card etc. I describe myself as a People-Systems Integrator, and this seems to be easily understood.

Ergonomics now tries to be a 'discipline' that does 'science' and a 'profession' that does 'practice', and the result is a mess. The explanatory logo at the International Ergonomics Association website has only one text label up front and in high contrast - Human Centered Design.
Most areas of work distinguish professional practice and underpinning science, e.g.
Professional practice | Underpinning scientific discipline
Farming | Agricultural research
Medicine | Medical research, immunology, physiology etc.
Architecture | Architectural research
Software engineering | Computer science
1970s: Ergonomics | Ergonomics research, human sciences
2015 formal: Ergonomics | Ergonomics
2015 IRL: UX, HCD, IA, Ergonomics | Human sciences, social sciences, design thinking, Ergonomics


Thursday 11 June 2015

Clarifying Transparency


A dip of the toe into the topic of 'transparency', aimed at making the various meanings of the term a little more transparent.

Andy Clark has defined transparent (and opaque) technologies in his book 'Natural-Born Cyborgs'; "A transparent technology is a technology that is so well fitted to, and integrated with, our own lives, biological capacities, and projects as to become (as Mark Weiser and Donald Norman have both stressed) almost invisible in use. An opaque technology, by contrast, is one that keeps tripping the user up, requires skills and capacities that do not come naturally to the biological organism, and thus remains the focus of attention even during routine problem-solving activity. Notice that “opaque,” in this technical sense, does not mean “hard to understand” as much as “highly visible in use.” I may not understand how my hippocampus works, but it is a great example of a transparent technology nonetheless. I may know exactly how my home PC works, but it is opaque (in this special sense) nonetheless, as it keeps crashing and getting in the way of what I want to do. In the case of such opaque technologies, we distinguish sharply and continuously between the user and the tool."
An example of the difference might be 3D interaction with and without head tracking.

Robert Hoffman and Dave Woods' Laws of Cognitive Work include Mr. Weasley’s Law: Humans should be supported in rapidly achieving a veridical and useful understanding of the “intent” and “stance” of the machines. [This comes from Harry Potter: “Never trust anything that can think for itself if you can’t see where it keeps its brain.”]. Gary Klein has discussed The Man behind the Curtain (from the Wizard of Oz). Information technology usually doesn’t let people see how it reasons; it’s not understandable.
Mihaela Vorvoreanu has picked up on The Discovery of Heaven, a novel of ideas by Dutch author Harry Mulisch: "He claims that power exists because of the Golden Wall that separates the masses (the public) from decision makers. Government, in his example, is a mystery hidden behind this Golden Wall, regarded by the masses (the subject of power) in awe. Once the Golden Wall falls (or becomes transparent), people see that behind it lies the same mess as outside it. There are people in there, too. Messy people, engaged in messy, imperfect decision making processes. The awe disappears. With it, the power. What happens actually, with the fall of the Golden Wall, is higher accountability and a more equitable distribution of power. Oh, and the risk of anarchy. But the Golden Wall must fall."

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability. Machine learning should be transparent to inspection e.g. for explanation, accountability or legal 'stare decisis'.
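As a minimal illustration of 'transparent to inspection', the sketch below uses scikit-learn and its bundled iris data (purely as an example) to train a small decision tree and print the rules it actually applies; an equivalent neural network offers no such readable account.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(iris.data, iris.target)

    # The fitted model can be dumped as human-readable if/then rules -
    # the kind of inspectability the transparency argument relies on.
    print(export_text(tree, feature_names=list(iris.feature_names)))

Whether the accuracy cost of such constrained models is acceptable is, of course, exactly the trade-off the transparency argument is about.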
Alex Howard has argued for 'algorithmic transparency' in the use of big data for public policy. "Our world, awash in data, will require new techniques to ensure algorithmic accountability, leading the next-generation of computational journalists to file Freedom of Information requests for code, not just data, enabling them to reverse engineer how decisions and policies are being made by programs in the public and private sectors. To do otherwise would allow data-driven decision making to live inside of a black box, ruled by secret codes, hidden from the public eye or traditional methods of accountability. Given that such a condition could prove toxic to democratic governance and perhaps democracy itself, we can only hope that they succeed."
Algorithmic transparency seems linked to 'technological due process' proposed by Danielle Keats Citron. "A new concept of technological due process is essential to vindicate the norms underlying last century's procedural protections. This Article shows how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework of mechanisms capable of enhancing the transparency, accountability, and accuracy of rules embedded in automated decision-making systems."
Zach Blas has proposed the term 'informatic opacity': "Today, if control and policing dominantly operate through making bodies informatically visible, then informatic opacity becomes a prized means of resistance against the state and its identity politics. Such opaque actions approach capture technologies as one instantiation of the vast uses of representation and visibility to control and oppress, and therefore, refuse the false promises of equality, rights, and inclusion offered by state representation and, alternately create radical exits that open pathways to self-determination and autonomy. In fact, a pervasive desire to flee visibility is casting a shadow across political, intellectual, and artistic spheres; acts of escape and opacity are everywhere today!"

At the level of user interaction, Woods and Sarter use the term 'observability': "The key to supporting human-machine communication and system awareness is a high level of system observability. Observability is the technical term that refers to the cognitive work needed to extract meaning from available data (Rasmussen, 1985). This term captures the fundamental relationship among data, observer and context of observation that is fundamental to effective feedback. Observability is distinct from data availability, which refers to the mere presence of data in some form in some location. Observability refers to processes involved in extracting useful information. It results from the interplay between a human user knowing when to look for what information at what point in time and a system that structures data to support attentional guidance.... A completely unobservable system is characterized by users in almost all cases asking a version of all three of the following questions: (1) What is the system doing? (2) Why is it doing that? (3) What is it going to do next? When designing joint cognitive systems, (1) is often addressed, as it is relatively easy to show the current state of a system. (2) is sometimes addressed, depending on how intent/targets are defined in the system, and (3) is rarely pursued as it is obviously quite difficult to predict what a complex joint system is going to do next, even if the automaton is deterministic."

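One way to design for those three questions is to make them part of the machine's interface from the outset. The sketch below is a hypothetical shape for such an interface (the class and field names are mine, not an established API); a real system would derive the answers from its plan and state rather than return canned strings.

    from dataclasses import dataclass

    @dataclass
    class ObservableStatus:
        doing_now: str       # (1) What is the system doing?
        because: str         # (2) Why is it doing that?
        next_expected: str   # (3) What is it going to do next?

    class ObservableAgent:
        def status(self) -> ObservableStatus:
            # Stub answers for illustration only.
            return ObservableStatus(
                doing_now="holding position",
                because="datalink quality below planning threshold",
                next_expected="resume transit when link quality recovers",
            )

    print(ObservableAgent().status())

Question (3) remains the hard one, as the quotation says; an interface like this only makes the gap visible.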
Gudela Grote's (2005) concept of 'Zone of No Control' is important: "Instead of lamenting the lack of human control over technology and of demanding over and over again that control be reinstated, the approach presented here assumes very explicitly that current and future technology contains more or less substantial zones of no control. Any system design should build on this assumption and develop concepts for handling the lack of control in a way that does not delegate the responsibility to the human operator, but holds system developers, the organizations operating the systems, and societal actors accountable. This could happen much more effectively if uncertainties were made transparent and the human operator were relieved of his or her stop-gap and backup function."

Friday 5 June 2015

Giving automation a personality

Kathy Abbott wrote: "LESSON 8: Be cautious about referring to automated systems as another crewmember. We hear talk about “pilot’s associate,” “electronic copilots” and other such phrases. While automated systems are becoming increasingly capable, they are not humans. When we attribute human characteristics to automated systems, there is some risk of creating false expectations about strengths and limitations, and encouraging reliance that leads to operational vulnerabilities (see Lesson 1)."
The topic of personality for automation is one of four I have termed 'jokers' - issues where there is no 'right' design solution, and where the badness of the solution needs to be managed through life. (The others are risk compensation, automation bias, and moral buffering).
Jaron Lanier called the issue of anthropomorphism “the abortion question of the computer world”—a debate that forced people to take sides regarding “what a person is and is not.” In an article he said "The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.
In other words, if you go back to some of the thought experiments from philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up."
Matthias Scheutz notes "Humans are deeply affective beings that expect other human-like agents to be sensitive to and express their own affect. Hence, complex artificial agents that are not capable of affective communication will inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet, affective artificial agents with genuine affect will then themselves have the potential for suffering, which leads to the “Affect Dilemma for Artificial Agents, and more generally, artificial systems." In addition to the Affect Dilemma, Scheutz notes Emotional Dependence: "emotional dependence on social robots is different from other human dependencies on technology (e.g., different both in kind and quality from depending on one’s cell phone, wrist watch, or PDA).... It is important in this context to note how little is required on the robotic side to cause people to form relationship with robots."
Clifford Nass has proposed the Computers-Are-Social-Actors (CASA) paradigm: "people’s responses to computers are fundamentally “social”—that is, people apply social rules, norms, and expectations core to interpersonal relationships when they interact with computers. In light of the CASA paradigm, identifying the conditions that foster or undermine trust in the context of interpersonal communication and relationships may help us better understand the trust dynamics in human-computer communication. This chapter discusses experimental studies grounded in the CASA paradigm that demonstrate how (1) perceived people-computer similarity in personality, (2) manifestation of caring behaviors in computers, and (3) consistency in human/non-human representations of computers affect the extent to which people perceive computers as trustworthy."
The philosopher Jurgen Habermas has proposed that action can be considered from a number of viewpoints.  To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental action and strategic action.  "Instrumental action is governed by technical rules based on empirical knowledge.  In every case they imply empirical predictions about observable events, physical or social."  Strategic action is part-technical, part-social and refers to the decision-making procedure, and is at the decision theory level e.g. the choice between maximin, maximax criteria etc., and needs supplementing by values and maxims.  Communicative action "is governed by consensual norms, which define reciprocal expectations about behaviour and which must be understood and recognized by at least two acting subjects.  Social norms are enforced by sanctions....Violation of a rule has different consequence according to the type.  Incompetent behaviour which violates valid technical rules or strategies, is condemned per se to failure through lack of success; the 'punishment' is built, so to speak, into its rebuff by reality.  Deviant behaviour, which violates consensual norms, provokes sanctions that are connected with the rules only externally, that is by convention.  Learned rules of purposive-rational action supply us with skills, internalized norms with personality structures.  Skills put us into a position to solve problems, motivations allow us to follow norms." 

The figure below illustrates the different types of action in relation to a temperature limit in an aircraft jet engine, as knowledge processing moves from design information to the development of operating procedures to operation.

Physical behaviour (say blade root deflection as a function of temperature) constitutes instrumental action and may be gathered from a number of rigs and models.  The weighting to be given to the various sources of data, the error bands to be considered and the type of criteria to use constitute strategic action.  The decision by the design community to set a limit (above which warranty or disciplinary considerations might be applied) is communicative action.  The operator (currently) has some access to instrumental action, and has strategic and communicative actions that relate to operation rather than design. In terms of providing operator support, instrumental action can be treated computationally, strategic action can be addressed by decision support tools, but communicative action is not tractable.  The potential availability of all information is bound to challenge norms that do not align with purposive-rational action.  The need for specific operating limits to support particular circumstances will challenge the treatment of generalised strategic action.  The enhanced communication between designers and operators is likely to produce greater clarity in distinguishing what constitutes an appropriate physical limit for a particular circumstance, and what constitutes a violation.
Automating the decision making of the design community (say by 'big data') looks 'challenging' for all but instrumental action.
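In code terms, the split might look like the sketch below: the instrumental part is plain computation, the strategic part is where the designer's choices (source weightings, error bands, criterion) live and ought to be visible, and the communicative part does not appear in the code at all. The rig names, coefficients and margins are entirely invented for illustration.

    # Instrumental action: an empirical model of blade-root deflection
    # against temperature (coefficients invented for illustration).
    def deflection_mm(temp_c: float) -> float:
        return 0.002 * temp_c + 1.5

    # Strategic action: combine several (hypothetical) data sources.
    # The error bands and the worst-credible-case criterion are design
    # choices, not facts about the world, and should be visible as such.
    sources = [
        {"name": "rig_A",   "limit_c": 710.0, "error_band_c": 15.0},
        {"name": "rig_B",   "limit_c": 725.0, "error_band_c": 25.0},
        {"name": "model_1", "limit_c": 705.0, "error_band_c": 10.0},
    ]
    proposed_limit = min(s["limit_c"] - s["error_band_c"] for s in sources)

    print(f"deflection at 700 C: {deflection_mm(700):.1f} mm")
    print(f"proposed operating limit: {proposed_limit:.0f} C")
    # Communicative action: whether exceeding that figure counts as a
    # violation is a matter of agreed norms between designers and
    # operators, and is not computable here.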
So,
1. Users are going to assign human qualities to automation, whether the designers plan for it or not. Kathy Abbott's caution is futile. It is going to happen so far as the user is concerned.
2. It is probably better, therefore, to consider the automation's personality during design, to minimise the 'false expectations' that Kathy Abbott identifies.
3. Designing-in a personality isn't going to be easy. The 'smarter' the system, the harder (and the more important) it is, is my guess. Enjoy the current state of the art with a Dalek Relaxation Tape.