Thursday 11 June 2015

Clarifying Transparency


A dip of the toe into the topic of 'transparency', aimed at making the various meanings of the term a little more transparent.

Andy Clark has defined transparent (and opaque) technologies in his book 'Natural-Born Cyborgs': "A transparent technology is a technology that is so well fitted to, and integrated with, our own lives, biological capacities, and projects as to become (as Mark Weiser and Donald Norman have both stressed) almost invisible in use. An opaque technology, by contrast, is one that keeps tripping the user up, requires skills and capacities that do not come naturally to the biological organism, and thus remains the focus of attention even during routine problem-solving activity. Notice that “opaque,” in this technical sense, does not mean “hard to understand” as much as “highly visible in use.” I may not understand how my hippocampus works, but it is a great example of a transparent technology nonetheless. I may know exactly how my home PC works, but it is opaque (in this special sense) nonetheless, as it keeps crashing and getting in the way of what I want to do. In the case of such opaque technologies, we distinguish sharply and continuously between the user and the tool."
An example of the difference might be 3D interaction with and without head tracking.

Robert Hoffman and Dave Woods' Laws of Cognitive Work include Mr. Weasley’s Law: Humans should be supported in rapidly achieving a veridical and useful understanding of the “intent” and “stance” of the machines. [This comes from Harry Potter: “Never trust anything that can think for itself if you can’t see where it keeps its brain.”] Gary Klein has discussed The Man behind the Curtain (from the Wizard of Oz): information technology usually doesn’t let people see how it reasons; it’s not understandable.
Mihaela Vorvoreanu has picked up on The Discovery of Heaven, a novel of ideas by Dutch author Harry Mulisch: "He claims that power exists because of the Golden Wall that separates the masses (the public) from decision makers. Government, in his example, is a mystery hidden behind this Golden Wall, regarded by the masses (the subject of power) in awe. Once the Golden Wall falls (or becomes transparent), people see that behind it lies the same mess as outside it. There are people in there, too. Messy people, engaged in messy, imperfect decision making processes. The awe disappears. With it, the power. What happens actually, with the fall of the Golden Wall, is higher accountability and a more equitable distribution of power. Oh, and the risk of anarchy. But the Golden Wall must fall."

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability. Machine learning, on this view, should be transparent to inspection, e.g. for explanation, accountability, or legal 'stare decisis'.
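To make the inspection point concrete, here is a minimal sketch (my own illustration, not from Bostrom and Yudkowsky, and assuming scikit-learn with its bundled iris dataset): a fitted decision tree can be dumped as human-readable if/then rules that can be read, challenged and cited, whereas a fitted neural network offers no comparable account of how any particular decision was reached.

    # Sketch: print the fitted rules of a decision tree for inspection.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(iris.data, iris.target)

    # Human-readable rules -- a basis for explanation, accountability,
    # or something like 'stare decisis' over past automated decisions.
    print(export_text(tree, feature_names=list(iris.feature_names)))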
Alex Howard has argued for 'algorithmic transparency' in the use of big data for public policy. "Our world, awash in data, will require new techniques to ensure algorithmic accountability, leading the next-generation of computational journalists to file Freedom of Information requests for code, not just data, enabling them to reverse engineer how decisions and policies are being made by programs in the public and private sectors. To do otherwise would allow data-driven decision making to live inside of a black box, ruled by secret codes, hidden from the public eye or traditional methods of accountability. Given that such a condition could prove toxic to democratic governance and perhaps democracy itself, we can only hope that they succeed."
Algorithmic transparency seems linked to 'technological due process' proposed by Danielle Keats Citron. "A new concept of technological due process is essential to vindicate the norms underlying last century's procedural protections. This Article shows how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework of mechanisms capable of enhancing the transparency, accountability, and accuracy of rules embedded in automated decision-making systems."
Zach Blas has proposed the term 'informatic opacity': "Today, if control and policing dominantly operate through making bodies informatically visible, then informatic opacity becomes a prized means of resistance against the state and its identity politics. Such opaque actions approach capture technologies as one instantiation of the vast uses of representation and visibility to control and oppress, and therefore, refuse the false promises of equality, rights, and inclusion offered by state representation and, alternately, create radical exits that open pathways to self-determination and autonomy. In fact, a pervasive desire to flee visibility is casting a shadow across political, intellectual, and artistic spheres; acts of escape and opacity are everywhere today!"

At the level of user interaction, Woods and Sarter use the term 'observability': "The key to supporting human-machine communication and system awareness is a high level of system observability. Observability is the technical term that refers to the cognitive work needed to extract meaning from available data (Rasmussen, 1985). This term captures the fundamental relationship among data, observer and context of observation that is fundamental to effective feedback. Observability is distinct from data availability, which refers to the mere presence of data in some form in some location. Observability refers to processes involved in extracting useful information. It results from the interplay between a human user knowing when to look for what information at what point in time and a system that structures data to support attentional guidance.... A completely unobservable system is characterized by users in almost all cases asking a version of all three of the following questions: (1) What is the system doing? (2) Why is it doing that? (3) What is it going to do next? When designing joint cognitive systems, (1) is often addressed, as it is relatively easy to show the current state of a system. (2) is sometimes addressed, depending on how intent/targets are defined in the system, and (3) is rarely pursued as it is obviously quite difficult to predict what a complex joint system is going to do next, even if the automaton is deterministic."

Gudela Grote's (2005) concept of 'Zone of No Control' is important: "Instead of lamenting the lack of human control over technology and of demanding over and over again that control be reinstated, the approach presented here assumes very explicitly that current and future technology contains more or less substantial zones of no control. Any system design should build on this assumption and develop concepts for handling the lack of control in a way that does not delegate the responsibility to the human operator, but holds system developers, the organizations operating the systems, and societal actors accountable. This could happen much more effectively if uncertainties were made transparent and the human operator were relieved of his or her stop-gap and backup function."

Friday 5 June 2015

Giving automation a personality

Kathy Abbott wrote: "LESSON 8: Be cautious about referring to automated systems as another crewmember. We hear talk about “pilot’s associate,” “electronic copilots” and other such phrases. While automated systems are becoming increasingly capable, they are not humans. When we attribute human characteristics to automated systems, there is some risk of creating false expectations about strengths and limitations, and encouraging reliance that leads to operational vulnerabilities (see Lesson 1)."
The topic of personality for automation is one of four I have termed 'jokers' - issues where there is no 'right' design solution, and where the badness of the solution needs to be managed through life. (The others are risk compensation, automation bias, and moral buffering).
Jaron Lanier called the issue of anthropomorphism “the abortion question of the computer world”—a debate that forced people to take sides regarding “what a person is and is not.” In an article he said "The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.
In other words, if you go back to some of the thought experiments from philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up."
Matthias Scheutz notes: "Humans are deeply affective beings that expect other human-like agents to be sensitive to and express their own affect. Hence, complex artificial agents that are not capable of affective communication will inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet, affective artificial agents with genuine affect will then themselves have the potential for suffering, which leads to the “Affect Dilemma for Artificial Agents”, and more generally, artificial systems." In addition to the Affect Dilemma, Scheutz notes Emotional Dependence: "emotional dependence on social robots is different from other human dependencies on technology (e.g., different both in kind and quality from depending on one’s cell phone, wrist watch, or PDA).... It is important in this context to note how little is required on the robotic side to cause people to form relationships with robots."
Clifford Nass has proposed the Computers-Are-Social-Actors (CASA) paradigm: "people’s responses to computers are fundamentally “social”—that is, people apply social rules, norms, and expectations core to interpersonal relationships when they interact with computers. In light of the CASA paradigm, identifying the conditions that foster or undermine trust in the context of interpersonal communication and relationships may help us better understand the trust dynamics in human-computer communication. This chapter discusses experimental studies grounded in the CASA paradigm that demonstrate how (1) perceived people-computer similarity in personality, (2) manifestation of caring behaviors in computers, and (3) consistency in human/non-human representations of computers affect the extent to which people perceive computers as trustworthy."
The philosopher Jurgen Habermas has proposed that action can be considered from a number of viewpoints.  To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental action and strategic action.  "Instrumental action is governed by technical rules based on empirical knowledge.  In every case they imply empirical predictions about observable events, physical or social."  Strategic action is part-technical, part-social: it refers to the decision-making procedure at the level of decision theory (e.g. the choice between maximin and maximax criteria) and needs supplementing by values and maxims.  Communicative action "is governed by consensual norms, which define reciprocal expectations about behaviour and which must be understood and recognized by at least two acting subjects.  Social norms are enforced by sanctions.... Violation of a rule has different consequences according to the type.  Incompetent behaviour, which violates valid technical rules or strategies, is condemned per se to failure through lack of success; the 'punishment' is built, so to speak, into its rebuff by reality.  Deviant behaviour, which violates consensual norms, provokes sanctions that are connected with the rules only externally, that is by convention.  Learned rules of purposive-rational action supply us with skills, internalized norms with personality structures.  Skills put us into a position to solve problems, motivations allow us to follow norms."
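As a small, entirely invented illustration of the decision-theory point: the same evidence can support different choices depending on whether a pessimistic maximin or an optimistic maximax criterion is adopted, which is exactly why strategic action needs supplementing by values and maxims.

    # Invented payoffs for two design options across three scenarios.
    payoffs = {
        "option_A": [4, 5, 6],   # modest outcome in every scenario
        "option_B": [1, 5, 9],   # poor worst case, excellent best case
    }

    # Maximin: pick the option whose worst case is best (pessimistic).
    maximin_choice = max(payoffs, key=lambda k: min(payoffs[k]))

    # Maximax: pick the option whose best case is best (optimistic).
    maximax_choice = max(payoffs, key=lambda k: max(payoffs[k]))

    print(maximin_choice, maximax_choice)  # -> option_A option_B

Neither criterion is 'right'; choosing between them is the value-laden, part-social element of strategic action.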

The figure below illustrates the different types of action in relation to a temperature limit in an aircraft jet engine, as knowledge processing moves from design information to the development of operating procedures to operation.

Physical behaviour (say blade root deflection as a function of temperature) constitutes instrumental action and may be gathered from a number of rigs and models.  The weighting to be given to the various sources of data, the error bands to be considered and the type of criteria to use constitute strategic action.  The decision by the design community to set a limit (above which warranty or disciplinary considerations might be applied) is communicative action.  The operator (currently) has some access to instrumental action, and has strategic and communicative actions that relate to operation rather than design. In terms of providing operator support, instrumental action can be treated computationally, strategic action can be addressed by decision support tools, but communicative action is not tractable.  The potential availability of all information is bound to challenge norms that do not align with purposive-rational action.  The need for specific operating limits to support particular circumstances will challenge the treatment of generalised strategic action.  The enhanced communication between designers and operators is likely to produce greater clarity in distinguishing what constitutes an appropriate physical limit for a particular circumstance, and what constitutes a violation.
Automating the decision making of the design community (say by 'big data') looks 'challenging' for all but instrumental action.
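A rough sketch of where that boundary falls (my own illustration; the deflection model, data sources, weights and numbers are invented placeholders, not engine data): the instrumental part is ordinary computation, the strategic part is an explicit, value-laden choice of weights and criterion that a decision support tool could expose, and the communicative part, agreeing that the resulting number is a limit whose violation carries sanctions, is not computed at all.

    # Instrumental action: a physical model, here an invented placeholder
    # relating blade root deflection (mm) to temperature (degrees C).
    def predicted_deflection(temp_c: float) -> float:
        return 0.002 * temp_c + 0.05

    # Strategic action: combine limit estimates from several sources, each
    # with a chosen weight and error band, under a chosen conservative
    # criterion. The weights, bands and criterion are value-laden choices.
    sources = [
        {"limit_estimate": 905.0, "weight": 0.5, "error_band": 10.0},  # rig test
        {"limit_estimate": 920.0, "weight": 0.3, "error_band": 25.0},  # model
        {"limit_estimate": 915.0, "weight": 0.2, "error_band": 15.0},  # fleet data
    ]

    def proposed_limit(sources) -> float:
        # Weighted mean of the worst-case (lower) end of each error band.
        total_weight = sum(s["weight"] for s in sources)
        return sum(s["weight"] * (s["limit_estimate"] - s["error_band"])
                   for s in sources) / total_weight

    print(round(proposed_limit(sources), 1))  # -> 896.0, a candidate limit

    # Communicative action: the decision that this number *is* the limit,
    # with warranty or disciplinary consequences for exceeding it, is a
    # consensual norm -- it is agreed and enforced, not computed.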
So,
1. Users are going to assign human qualities to automation, whether the designers plan for it or not. Kathy Abbott's caution is futile. It is going to happen so far as the user is concerned.
2. It is probably better, therefore, to consider the automation's personality during design, to minimise the 'false expectations' that Kathy Abbott identifies.
3. Designing-in a personality isn't going to be easy. My guess is that the 'smarter' the system, the harder (and the more important) it becomes. Enjoy the current state of the art with a Dalek Relaxation Tape.