Thursday, 11 June 2015

Clarifying Transparency


A dip of the toe into the topic of 'transparency', aimed at making the various meanings of the term a little more transparent.

Andy Clark has defined transparent (and opaque) technologies in his book 'Natural-Born Cyborgs': "A transparent technology is a technology that is so well fitted to, and integrated with, our own lives, biological capacities, and projects as to become (as Mark Weiser and Donald Norman have both stressed) almost invisible in use. An opaque technology, by contrast, is one that keeps tripping the user up, requires skills and capacities that do not come naturally to the biological organism, and thus remains the focus of attention even during routine problem-solving activity. Notice that “opaque,” in this technical sense, does not mean “hard to understand” as much as “highly visible in use.” I may not understand how my hippocampus works, but it is a great example of a transparent technology nonetheless. I may know exactly how my home PC works, but it is opaque (in this special sense) nonetheless, as it keeps crashing and getting in the way of what I want to do. In the case of such opaque technologies, we distinguish sharply and continuously between the user and the tool."
An example of the difference might be 3D interaction with and without head tracking: with tracking, the display recedes from attention into the task; without it, the interface itself stays in view and demands attention.

Robert Hoffman and Dave Woods' Laws of Cognitive Work include Mr. Weasley’s Law: Humans should be supported in rapidly achieving a veridical and useful understanding of the “intent” and “stance” of the machines. [This comes from Harry Potter: “Never trust anything that can think for itself if you can’t see where it keeps its brain.”]. Gary Klein has discussed The Man behind the Curtain (from the Wizard of Oz). Information technology usually doesn’t let people see how it reasons; it’s not understandable.
Mihaela Vorvoreanu has picked up on The Discovery of Heaven, a novel of ideas by Dutch author Harry Mulisch: "He claims that power exists because of the Golden Wall that separates the masses (the public) from decision makers. Government, in his example, is a mystery hidden behind this Golden Wall, regarded by the masses (the subject of power) in awe. Once the Golden Wall falls (or becomes transparent), people see that behind it lies the same mess as outside it. There are people in there, too. Messy people, engaged in messy, imperfect decision making processes. The awe disappears. With it, the power. What happens actually, with the fall of the Golden Wall, is higher accountability and a more equitable distribution of power. Oh, and the risk of anarchy. But the Golden Wall must fall."

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability. Machine learning should be transparent to inspection e.g. for explanation, accountability or legal 'stare decisis'.
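To make 'transparent to inspection' concrete, here is a minimal Python sketch using scikit-learn, with invented data and feature names; the fitted tree can be printed as rules a human can read, audit, and cite, which is not true of a trained neural network's weight matrices.

```python
# Minimal sketch of 'transparent to inspection' - data and feature names
# are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 0], [40, 1], [35, 0], [60, 1], [22, 1], [55, 0]]  # [age, prior_claims]
y = [0, 1, 0, 1, 0, 1]                                      # decision to be accounted for

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole decision procedure is visible as nested if/else rules,
# e.g. something like "|--- age <= 37.50 ... class: 0" - quotable in an
# explanation, an audit, or an argument from precedent.
print(export_text(clf, feature_names=["age", "prior_claims"]))
```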
Alex Howard has argued for 'algorithmic transparency' in the use of big data for public policy. "Our world, awash in data, will require new techniques to ensure algorithmic accountability, leading the next-generation of computational journalists to file Freedom of Information requests for code, not just data, enabling them to reverse engineer how decisions and policies are being made by programs in the public and private sectors. To do otherwise would allow data-driven decision making to live inside of a black box, ruled by secret codes, hidden from the public eye or traditional methods of accountability. Given that such a condition could prove toxic to democratic governance and perhaps democracy itself, we can only hope that they succeed."
Algorithmic transparency seems linked to 'technological due process' proposed by Danielle Keats Citron. "A new concept of technological due process is essential to vindicate the norms underlying last century's procedural protections. This Article shows how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework of mechanisms capable of enhancing the transparency, accountability, and accuracy of rules embedded in automated decision-making systems."
Zach Blas has proposed the term 'informatic opacity': "Today, if control and policing dominantly operate through making bodies informatically visible, then informatic opacity becomes a prized means of resistance against the state and its identity politics. Such opaque actions approach capture technologies as one instantiation of the vast uses of representation and visibility to control and oppress, and therefore, refuse the false promises of equality, rights, and inclusion offered by state representation and, alternately create radical exits that open pathways to self-determination and autonomy. In fact, a pervasive desire to flee visibility is casting a shadow across political, intellectual, and artistic spheres; acts of escape and opacity are everywhere today!"

At the level of user interaction, Woods and Sarter use the term 'observability': "The key to supporting human-machine communication and system awareness is a high level of system observability. Observability is the technical term that refers to the cognitive work needed to extract meaning from available data (Rasmussen, 1985). This term captures the fundamental relationship among data, observer and context of observation that is fundamental to effective feedback. Observability is distinct from data availability, which refers to the mere presence of data in some form in some location. Observability refers to processes involved in extracting useful information. It results from the interplay between a human user knowing when to look for what information at what point in time and a system that structures data to support attentional guidance.... A completely unobservable system is characterized by users in almost all cases asking a version of all three of the following questions: (1) What is the system doing? (2) Why is it doing that? (3) What is it going to do next? When designing joint cognitive systems, (1) is often addressed, as it is relatively easy to show the current state of a system. (2) is sometimes addressed, depending on how intent/targets are defined in the system, and (3) is rarely pursued as it is obviously quite difficult to predict what a complex joint system is going to do next, even if the automaton is deterministic."

Gudela Grote's (2005) concept of 'Zone of No Control' is important: "Instead of lamenting the lack of human control over technology and of demanding over and over again that control be reinstated, the approach presented here assumes very explicitly that current and future technology contains more or less substantial zones of no control. Any system design should build on this assumption and develop concepts for handling the lack of control in a way that does not delegate the responsibility to the human operator, but holds system developers, the organizations operating the systems, and societal actors accountable. This could happen much more effectively if uncertainties were made transparent and the human operator were relieved of his or her stop-gap and backup function."

Friday, 5 June 2015

Giving automation a personality

Kathy Abbott wrote: "LESSON 8: Be cautious about referring to automated systems as another crewmember. We hear talk about “pilot’s associate,” “electronic copilots” and other such phrases. While automated systems are becoming increasingly capable, they are not humans. When we attribute human characteristics to automated systems, there is some risk of creating false expectations about strengths and limitations, and encouraging reliance that leads to operational vulnerabilities (see Lesson 1)."
The topic of personality for automation is one of four I have termed 'jokers' - issues where there is no 'right' design solution, and where the badness of the solution needs to be managed through life. (The others are risk compensation, automation bias, and moral buffering).
Jaron Lanier called the issue of anthropomorphism “the abortion question of the computer world”—a debate that forced people to take sides regarding “what a person is and is not.” In an article he said "The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.
In other words, if you go back to some of the thought experiments from philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up."
Matthias Scheutz notes "Humans are deeply affective beings that expect other human-like agents to be sensitive to and express their own affect. Hence, complex artificial agents that are not capable of affective communication will inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet, affective artificial agents with genuine affect will then themselves have the potential for suffering, which leads to the “Affect Dilemma” for artificial agents, and more generally, artificial systems." In addition to the Affect Dilemma, Scheutz notes Emotional Dependence: "emotional dependence on social robots is different from other human dependencies on technology (e.g., different both in kind and quality from depending on one’s cell phone, wrist watch, or PDA).... It is important in this context to note how little is required on the robotic side to cause people to form relationships with robots."
Clifford Nass has proposed the Computers-Are-Social-Actors (CASA) paradigm: "people’s responses to computers are fundamentally “social”—that is, people apply social rules, norms, and expectations core to interpersonal relationships when they interact with computers. In light of the CASA paradigm, identifying the conditions that foster or undermine trust in the context of interpersonal communication and relationships may help us better understand the trust dynamics in human-computer communication. This chapter discusses experimental studies grounded in the CASA paradigm that demonstrate how (1) perceived people-computer similarity in personality, (2) manifestation of caring behaviors in computers, and (3) consistency in human/non-human representations of computers affect the extent to which people perceive computers as trustworthy."
The philosopher Jurgen Habermas has proposed that action can be considered from a number of viewpoints. To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental action and strategic action. "Instrumental action is governed by technical rules based on empirical knowledge. In every case they imply empirical predictions about observable events, physical or social." Strategic action is part-technical, part-social: it refers to the decision-making procedure and sits at the level of decision theory (e.g. the choice between maximin and maximax criteria), and needs supplementing by values and maxims. Communicative action "is governed by consensual norms, which define reciprocal expectations about behaviour and which must be understood and recognized by at least two acting subjects. Social norms are enforced by sanctions....Violation of a rule has different consequence according to the type. Incompetent behaviour which violates valid technical rules or strategies, is condemned per se to failure through lack of success; the 'punishment' is built, so to speak, into its rebuff by reality. Deviant behaviour, which violates consensual norms, provokes sanctions that are connected with the rules only externally, that is by convention. Learned rules of purposive-rational action supply us with skills, internalized norms with personality structures. Skills put us into a position to solve problems, motivations allow us to follow norms."

The figure below illustrates the different types of action in relation to a temperature limit in an aircraft jet engine, as knowledge processing moves from design information to the development of operating procedures to operation.

Physical behaviour (say blade root deflection as a function of temperature) constitutes instrumental action and may be gathered from a number of rigs and models.  The weighting to be given to the various sources of data, the error bands to be considered and the type of criteria to use constitute strategic action.  The decision by the design community to set a limit (above which warranty or disciplinary considerations might be applied) is communicative action.  The operator (currently) has some access to instrumental action, and has strategic and communicative actions that relate to operation rather than design. In terms of providing operator support, instrumental action can be treated computationally, strategic action can be addressed by decision support tools, but communicative action is not tractable.  The potential availability of all information is bound to challenge norms that do not align with purposive-rational action.  The need for specific operating limits to support particular circumstances will challenge the treatment of generalised strategic action.  The enhanced communication between designers and operators is likely to produce greater clarity in distinguishing what constitutes an appropriate physical limit for a particular circumstance, and what constitutes a violation.
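As a toy illustration of the strategic-action step (the sources, numbers, error bands and weights below are all invented), setting a limit from several sources of data might look like this:

```python
# Toy sketch of strategic action in setting an engine temperature limit.
# Sources, limits, error bands and weights are invented; the point is that
# the choice of criterion is a decision, not a measurement.
sources = [
    # (name, estimated safe limit / deg C, error band, weight)
    ("component rig",   985.0, 15.0, 0.5),
    ("thermal model",  1010.0, 30.0, 0.3),
    ("engine test",     995.0, 10.0, 0.2),
]

# Maximin (pessimistic): guard against the worst plausible case.
maximin_limit = min(limit - err for _, limit, err, _ in sources)

# Maximax (optimistic): trust the most favourable plausible case.
maximax_limit = max(limit + err for _, limit, err, _ in sources)

# Weighted compromise: the weighting itself is a strategic choice.
weighted_limit = sum(limit * weight for _, limit, _, weight in sources)

print(f"maximin {maximin_limit:.0f}, maximax {maximax_limit:.0f}, "
      f"weighted {weighted_limit:.0f}")  # 970, 1040, 994 - same data, three limits
```

Which of the three numbers becomes the published limit, and whether exceeding it carries warranty or disciplinary force, is communicative action; nothing in the data settles it.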
Automating the decision making of the design community (say by 'big data') looks 'challenging' for all but instrumental action.
So,
1. Users are going to assign human qualities to automation, whether the designers plan for it or not. Kathy Abbott's caution is futile. It is going to happen so far as the user is concerned.
2. It is probably better, therefore, to consider the automation's personality during design, to minimise the 'false expectations' that Kathy Abbott identifies.
3. Designing-in a personality isn't going to be easy. My guess is that the 'smarter' the system, the harder (and the more important) it gets. Enjoy the current state of the art with a Dalek Relaxation Tape.

Friday, 13 September 2013

Getting a Windows 8 PC to be usable and useful

The infamous start page is less of a deal than reported. It is mostly full of junk that is easily removed, but the small tiles are probably still a bad way to access a substantial set of applications. The desktop is only a click away.  I plan to leave installing RetroUI until after Windows 8.1 has been assimilated. RetroUI looks like it might have the ability to turn the start page into something useful, and some other interesting possibilities.

The full-screen 'apps' are a complete disaster from a user point of view on first encounter. Fortunately there are free alternatives (please donate where you can) that are better than the MS offerings - I would use them anyway.

Apologies for the lack of links in what follows, but things are a bit fraught here. DYOR and YMMV of course, but my starter pack looks something like:
CCleaner of course.
Foobar for music and VLC player for other media.  You might want to add MakeMKV or FreeRIP.
Libre Office; the quirks of client templates mean that I will also need MS Office, and MS have done the dirty as regards running earlier versions, but IMHO the open office spreadsheets are much better than Excel.
Notepad++ may have features that are worth having over Notepad - depends on your usage.
Blue Griffon for web page writing, including drafting blog posts such as this.
Not sorted out .pdf applications yet.
Irfanview, Photofiltre, Inkscape, YEd for graphics.
Browsers of your choice; it is a real shame what has happened to Opera - Firefox seems to be the only capable browser around now. SRWare Iron is essentially Chrome with all the right privacy settings and is good for simple surfing.
You really need a file manager with Win8; I paid for Powerdesk Pro 9, but it wouldn't run, so I found FreeCommander - nearly as good, free, and it works [update: Powerdesk runs fine under Win8.1, and is significantly better than FreeCommander].
FreeFileSync for rapid and flexible synchronizing of folders. In my experience it sometimes leaves junk folders starting 'FFS' around, which need checking for content before deleting - see the sketch after this list.
Copernic for desktop search. At the start of Longhorn, "Where's my stuff?" was Bill Gates' big challenge for the OS that became Vista/7/8, but nothing seems to have happened.
PhraseExpress for keyboard shortcuts/macros/spellchecking/quotes.
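For the FreeFileSync leftovers, a minimal Python check script might look like the following; note that the 'FFS' naming pattern and the folder path are my own observation and example, not documented behaviour.

```python
# Minimal sketch: list leftover folders whose names start with 'FFS' and
# flag which are empty (and so safe to delete). The 'FFS' prefix is an
# observed pattern, not documented behaviour - check before deleting.
from pathlib import Path

root = Path(r"C:\Users\Me\Documents")  # hypothetical synced folder

for folder in (p for p in root.rglob("FFS*") if p.is_dir()):
    empty = not any(folder.iterdir())
    print(f"{folder}: {'empty - safe to delete' if empty else 'HAS CONTENT - check it'}")
```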

This post at Lifehacker is good on alternatives to what comes with the machine.


Making it feel like home (surprisingly important) meant importing the coffee bean .bmp to tile on the desktop - it didn't seem to be on the machine.

Update after initial use of Win8.1

The update wasn't an update. It was an App in the Store. Here on Solaris III that wasn't obvious. Bing searches on the MS web site didn't help - had to get Google to tell me. Apart from that, painless.
BUT they really want you to be assimilated. Transferring between your MS account and your local account always follows the path of maximum difficulty. For instance, there is no Freecell installed (and the old freecell.exe won't run - update: solution here). Ah, there is an App; this means lots of going into your MS account and giving permission for it to access all sorts of things, then fighting your way back to a local account, where I don't think it works. Far better to download free freecell solitaire from CNET (apparently a better game anyway). Why is it "my documents" but "your account" anyway? Just one of many instances of muddled inconsistency that bureaucracies produce when they don't do user testing.

Win8.1 is definitely an improvement on Win8. Given the outcry against Win8, however, it is remarkable that there seems to have been no proper UX involvement, or even simple user testing, before Win8.1 was released.

The Start screen is quite nice, and visually better than the old Start menu. BUT when you download applications, be sure to check that the icon is 'pinned to start' and maybe also 'pinned to taskbar'; otherwise you will be rummaging through Program Files. The pinning dialogue does, of course, have some annoying inconsistencies. The tile grid would be a good way to lay out options if MS didn't constrain the layout so much. Start out by getting rid of as much junk as you can.

Even without using the Apps, you are forced to have some un-Appy moments interacting with Win8.1. When in the middle of some mindboggling interaction, remember that Esc won't work but the Windows key gets you to the Start page. RetroUI is less necessary than it would have been with Win8, but may be worth it - I am still considering getting it.

The Start page does not have a search box - you just type and it appears. Some numpty must have thought that was as cool as Cupertino. OK once you know. BUT the search itself seems to be useless. If you want to know how to fix annoying aspects of Win8.1, google it. So far for me, this has included:
  • Restoring 'confirm' before delete. (hint: wastebasket properties).
  • Getting rid of the obtrusive 'help', which is even worse than Clippy was - at least Clippy didn't take up a quarter of the screen.
  • Moving between MS and local accounts, staying away from Skydrive, getting out of the MS account once forced to be in it.
  • Finding a workaround for the loss of Start > Documents; I made a desktop shortcut to 'Recent Items'. The MS website proposal didn't match the Win8.1 UI.

"What now? - Oh, that what now..."

Some of the revisionist capitalist-running-dog press, with "leaks" of an update to Win8.1, are trying to airbrush what a disaster the Win8/8.1 UI is. I trust they were paid in silver rather than lunch. A group of schoolchildren with a UX project would not try to impose a phone touchscreen interface on a desktop monitor. To be that crass, you need a roomful of balding shouty predatory Silicon Valley business leaders. The penny is starting to drop in terms of updates to unwind this folly.
Couple this with @tomiahonen's forecast that Nokia/MS/Windows phones are doomed and the poor sales of Windows tablets to business, and we need to look elsewhere. The ending of support for XP and Win7 must be alarming a good many organizations. The move to open source formats in the public sector comes just at the wrong time for MS, and a free alternative to MS Office is very appealing in a time of austerity.
Apple and I parted company a long time ago. IMHO Apple without Steve Jobs is on its way to becoming as loved as Adobe (happy to be proved wrong). For business use, Android is a mess. So by default my next tech project is to try Linux - probably LXLE on an old machine. I just don't see the alternative.

Sunday, 7 July 2013

A Human-Centred view of Science

'Purity' is one of my favourite XKCD comics.


It summarizes a particular view of science - that of 'Single vision and Newton's Sleep' from William Blake.

Taking a more human-centred view of science is in line with Alexander Pope - "The proper study of mankind is man" - or Protagoras' statement that "Man is the measure of all things". Such a view gives us something more like this.


"I maintain that the human mystery is incredibly demeaned by scientific reductionism, with its claim in promissory materialism to account eventually for all of the spiritual world in terms of patterns of neuronal activity. This belief must be classed as a superstition. . . . we have to recognize that we are spiritual beings with souls existing in a spiritual world as well as material beings with bodies and brains existing in a material world."
Sir John Eccles --Evolution of the Brain, Creation of the Self, p. 241



Tuesday, 25 June 2013

Automation and context of use

The Heyns report on Lethal Autonomous Robotics makes a distinction between autonomous and automatic.

“Autonomous” needs to be distinguished from “automatic” or “automated.” Automatic systems, such as household appliances, operate within a structured and predictable environment. Autonomous systems can function in an open environment, under unstructured and dynamic circumstances. As such their actions (like those of humans) may ultimately be unpredictable, especially in situations as chaotic as armed conflict, and even more so when they interact with other autonomous systems.

Good point to make. The concept of 'predictability' for autonomous systems has problems when the environment introduces complexity - even for a simple device.


Here is an everyday example of simple automation that did not capture the context of use.


Manual tap and soap dispenser.

Automatic tap (sensor under outlet turns on water flow).

See the problem? You reach across for the soap and soak your sleeve.
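The failure is easy to state in controller terms. A hypothetical sketch (I have no idea how the real tap is programmed):

```python
# Hypothetical sketch of the automatic tap's logic: it fires on any
# presence under the outlet, with no model of what the user is reaching for.
def tap_should_run(object_detected_under_outlet: bool) -> bool:
    """Return True to open the water valve."""
    return object_detected_under_outlet  # a sleeve counts the same as a hand

print(tap_should_run(True))   # hand under tap: water on (intended)
print(tap_should_run(True))   # sleeve en route to the soap: water on (soaked)
# The sensor reading is identical in both cases; only the context differs,
# and the controller has no access to it.
```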

So - Human Centred Design is necessary even for the simplest of automatic systems. Context is just about everything.

Thursday, 18 April 2013

Human-Centred Management - a case for standards?

This post is a follow-up to a debate at the IEHF Conference led by Dr Scott Steedman CBE, Director of Standards, BSI. That background has not been added yet, so the post may not be clear as it stands.

We 'know what good looks like' as regards the human-centred management of people in enterprises. This note gives some pointers to that literature, with an emphasis on Pfeffer and Sutton's work on Evidence Based Management. A good summary can be found in the Happy Manifesto by Henry Stewart. The first chapter is called "Enable People to Work at Their Best". Perhaps using this knowledge to produce an inspirational standard would help the cause.
We need to promote what ought to be commonsense because it is overwhelmed by technocratic command and control thinking and an obsession with 'leadership'. The zillions of Something Management System standards promote the mechanistic management of things. Whilst this might be useful, on the basis of 'what gets measured gets done', such mechanistic procedures exacerbate some of the flaws in our society. A human-centred approach needs to be promoted to at least restore the balance. Fortunately the wherewithal to do this in a 'third generation' way has already been developed.

O'Reilly and Pfeffer have contrasted conventional strategy with values based strategy as follows:


[Based on: Hidden Value: How Great Companies Achieve Extraordinary Results With Ordinary People (Harvard Business School Press)]. I have had to deliver Value Plans working for BAE Systems - challenging and genuinely useful in my experience.

In 'The human equation: building profits by putting people first' Pfeffer has shown that a human-centred approach yields long-term business benefit.

We are overdue a paradigm change in the approach to people and safety. The new view of system safety has been well-developed by Woods, Dekker and others. 'How Complex Systems Fail' (pdf) would be a good starting point. The new paradigm has not taken hold (yet). Steven Shorrock has just written a terrific blog post on why this may be. Perhaps a standard would help.

There is over sixty years of literature and practice on Socio-Technical Systems - the conceptual foundation of ergonomics and human-centred approaches. A pointer to that literature can be found here.  In recent years, John Seddon's proprietary Vanguard implementation of systems thinking has found success in the UK public sector, producing benefits considerably in excess of a 20% target.

I will conclude with Pfeffer and Sutton: "The single best diagnostic to see if an organization is innovating, learning, and capable of turning knowledge into action is 'What happens when they make a mistake?'"

Saturday, 23 March 2013

Air Traveller User Experience (UX)

Air travellers are faced with conflicting stereotypes for document scanners: face up or face down. The check-in machine shown here expects my passport face-up.


The e-passport reader expects it face-down (which matches my expectation). This article says the future for boarding card readers is face up. Glasgow Airport has just installed face-down readers. It is clearly going to be a confusing mess for the next decade. Not life-threatening, but along with the security theatre, a signifier of the clueless authoritarianism that lurks behind the functionalist aesthetic.


A collection of recently gathered confusing iconography above (not air travel, but while travelling). The first sign does NOT mean that you are safe from flames in the lift. It is very unclear what the sign adds to the text in the second one. The bottom indicator was clear to the designer, I'm sure.

The picture above is from the Hamburg Metro at the airport. A true gem. To go to the city centre, you press button 3. Not that button 3 - the one on the screen.

UX is about more than just functionalism. Going through Heathrow, I was delighted to see this picture of Herne the Hunter.

The celebration of local mythology is to be welcomed, but does it have to be so functionalist? A more evocative image is this one:




The UX of air travel is affected by the sense of place. For British airports, it is adversely affected by a complete lack of any sense of place from a combination of soulless functionalism and relentless mercantilism. Glasgow Airport was (properly) designed by Basil Spence, who ".. wanted a design which helped the traveller to feel the adventure of flying from this particular airport”. Well, the feel of adventure has gone, and the design has been buried in extensions. It is still possible to see the back of the original terminal.

The good news is that Wetherspoons understand a sense of place. They have put up a poster to Spence and provided a place where you can appreciate the canopy (originally outside the building of course).