Thursday, 1 October 2015

Automation anxiety

Thoughts before the Big Potatoes event on Automation Anxiety.
First, and non-negotiably, everyone interested in the topic needs to watch and read Bruce Sterling on Smart City States. We need to understand the money before we look at the technology. Sterling's book on the Internet of Things is good (and widely available as an eBook).
The best summary of 'the future of work' that I've seen is this by Janna Anderson. Quite long, but as brief as it could be, given the breadth of coverage.
The topic of people and technology has been debated for a long time without much resolution, which must say something. Here is Paul Goodman in 1969 - Can technology be humane?

My first thoughts are in the mind map above, and very gloomy they are.
My work has been concerned with encouraging the adoption of a human-centred approach to design and operation, mostly in a technical context. The default approach is human-replacement automation. The problems with this have been well documented at regular intervals, starting with Nehemiah Jordan in 1963, working on SAGE. It is very difficult to shift engineers, their customers, and their managers to a human-centred approach. Things are still at the guerrilla usability level of warfare, winning small battles slowly. So the Cambrian Explosion of automation coming our way will be annoying and hard to use. We will have micromanagement, BS jobs, etc. Whitlock's very sensible Human Values will continue to be ignored in a transactional economy.
I would contend that platform capitalism, masquerading as the sharing economy (see here too), is winning, and that platform cooperativism is not going to catch up unless a mass of cavalry appears out of the sky. Capital will continue to outperform labour, and hence inequality will grow. Who owns the robots matters a lot, and it isn't looking good for the likes of me. Your local regulator is going to get crushed.
Secure jobs have gone - join the precariat. The sustainability of professions such as lawyer or surgeon is now under question because of the impact of automation. Maybe one day people will choose to be artisanal surgeons, but the disruption between now and then is going to be a rough ride.
I am aware that Human Resources departments have their limitations, but I fear that people analytics will be worse, and less ethical.
Finally, because of the gloomy nature of my own thoughts, I asked some Scandinavian friends for their views. One working in Norway is up to his eyes in automation / autonomy. His involvement means that the sponsors want a human-centred approach, and his work will deliver this. Not happening in UK/US much, I fear. A Dane with a fairly global perspective sees his industry imbued with some techno-utopian thinking, which it doesn't have the capability to deliver. A Swede who was active in Swedish human-centred work is now trying to export this to an Anglo-Saxon economy. She is unsure that the Nordic economies will be able to continue in their human-centred ways and resist the globalisation challenge.

Friday, 25 September 2015

Ergonomics - the taxi driver test

How are we to communicate ergonomics to the population at large? - asks Sarah Sharples, as President of the Chartered Institute of Ergonomics and Human Factors.
My short answer is - I don't try to.
"What is, or are, ergonomics? What is, or are, Human Factors? If ergonomics and Human Factors are the same, then what is 'ergonomics AND Human Factors'?" These questions - and their answers - confuse people, and rightly so.
Human-Centred Design, on the other hand, enters people's vocabulary on one hearing. Generally, folk are pleased to hear that it exists and, in equal measure, annoyed that it is not the norm.

I practise communicating Human-Centred Design to the population at large by wearing the jacket in the picture. I forget about the writing on the back, so I am surprised when people in a queue ask me "What is Human-Centred Design?". I have got better at giving easily understood answers. The guy in the chip shop was up for a long conversation on the merits of early Nokia phones (thank you, Timo).

On my business card etc. I describe myself as a People-Systems Integrator, and this seems to be easily understood.

Ergonomics now tries to be a 'discipline' that does 'science' and a 'profession' that does 'practice', and the result is a mess. The explanatory logo at the International Ergonomics Association website has only one text label up front and high-contrast - Human Centered Design.
Most areas of work distinguish professional practice and underpinning science, e.g.
Professional practice | Underpinning scientific discipline
Farming | Agricultural research
Medicine | Medical research, immunology, physiology etc.
Architecture | Architectural research
Software engineering | Computer science
1970s: Ergonomics | Ergonomics research, human sciences
2015 formal: Ergonomics | Ergonomics
2015 IRL: UX, HCD, IA, Ergonomics | Human sciences, social sciences, design thinking, Ergonomics

Thursday, 11 June 2015

Clarifying Transparency

A dip of the toe into the topic of 'transparency', aimed at making the various meanings of the term a little more transparent.

Andy Clark has defined transparent (and opaque) technologies in his book 'Natural-Born Cyborgs'; "A transparent technology is a technology that is so well fitted to, and integrated with, our own lives, biological capacities, and projects as to become (as Mark Weiser and Donald Norman have both stressed) almost invisible in use. An opaque technology, by contrast, is one that keeps tripping the user up, requires skills and capacities that do not come naturally to the biological organism, and thus remains the focus of attention even during routine problem-solving activity. Notice that “opaque,” in this technical sense, does not mean “hard to understand” as much as “highly visible in use.” I may not understand how my hippocampus works, but it is a great example of a transparent technology nonetheless. I may know exactly how my home PC works, but it is opaque (in this special sense) nonetheless, as it keeps crashing and getting in the way of what I want to do. In the case of such opaque technologies, we distinguish sharply and continuously between the user and the tool."
An example of the difference might be 3D interaction with and without head tracking.

Robert Hoffman and Dave Woods' Laws of Cognitive Work include Mr. Weasley’s Law: Humans should be supported in rapidly achieving a veridical and useful understanding of the “intent” and “stance” of the machines. [This comes from Harry Potter: “Never trust anything that can think for itself if you can’t see where it keeps its brain.”]. Gary Klein has discussed The Man behind the Curtain (from the Wizard of Oz). Information technology usually doesn’t let people see how it reasons; it’s not understandable.
Mihaela Vorvoreanu has picked up on The Discovery of Heaven, a novel of ideas by Dutch author Harry Mulisch: "He claims that power exists because of the Golden Wall that separates the masses (the public) from decision makers. Government, in his example, is a mystery hidden behind this Golden Wall, regarded by the masses (the subject of power) in awe. Once the Golden Wall falls (or becomes transparent), people see that behind it lies the same mess as outside it. There are people in there, too. Messy people, engaged in messy, imperfect decision making processes. The awe disappears. With it, the power. What happens actually, with the fall of the Golden Wall, is higher accountability and a more equitable distribution of power. Oh, and the risk of anarchy. But the Golden Wall must fall."

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability. Machine learning should be transparent to inspection, e.g. for explanation, accountability, or legal 'stare decisis'.
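To make the contrast concrete, here is a minimal, hand-rolled decision tree in Python. The features, thresholds, and 'loan screening' framing are all invented for illustration - this is not Bostrom and Yudkowsky's example. The point is that the structure that makes the decision can be printed verbatim as human-readable rules, which a trained neural network's weight matrices cannot offer:

```python
# A toy decision tree whose decision logic is fully inspectable.
# (Feature names, thresholds, and labels are invented for illustration.)

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.label = left, right, label

def decide(node, sample):
    """Walk the tree; every step is a single inspectable comparison."""
    if node.label is not None:
        return node.label
    branch = node.left if sample[node.feature] <= node.threshold else node.right
    return decide(branch, sample)

def explain(node, depth=0):
    """Render the complete decision logic as plain-English rules."""
    pad = "  " * depth
    if node.label is not None:
        return f"{pad}-> {node.label}\n"
    return (f"{pad}if {node.feature} <= {node.threshold}:\n"
            + explain(node.left, depth + 1)
            + f"{pad}else:\n"
            + explain(node.right, depth + 1))

# Invented loan-screening tree:
tree = Node("income", 30000,
            left=Node(label="refer to human reviewer"),
            right=Node("debt_ratio", 0.4,
                       left=Node(label="approve"),
                       right=Node(label="decline")))

print(explain(tree))
print(decide(tree, {"income": 45000, "debt_ratio": 0.2}))
```

Every individual decision can be traced to a specific branch of the printed rules, which is exactly the property that explanation, accountability, and 'stare decisis' require.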
Alex Howard has argued for 'algorithmic transparency' in the use of big data for public policy. "Our world, awash in data, will require new techniques to ensure algorithmic accountability, leading the next-generation of computational journalists to file Freedom of Information requests for code, not just data, enabling them to reverse engineer how decisions and policies are being made by programs in the public and private sectors. To do otherwise would allow data-driven decision making to live inside of a black box, ruled by secret codes, hidden from the public eye or traditional methods of accountability. Given that such a condition could prove toxic to democratic governance and perhaps democracy itself, we can only hope that they succeed."
Algorithmic transparency seems linked to 'technological due process' proposed by Danielle Keats Citron. "A new concept of technological due process is essential to vindicate the norms underlying last century's procedural protections. This Article shows how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework of mechanisms capable of enhancing the transparency, accountability, and accuracy of rules embedded in automated decision-making systems."
Zach Blas has proposed the term 'informatic opacity': "Today, if control and policing dominantly operate through making bodies informatically visible, then informatic opacity becomes a prized means of resistance against the state and its identity politics. Such opaque actions approach capture technologies as one instantiation of the vast uses of representation and visibility to control and oppress, and therefore, refuse the false promises of equality, rights, and inclusion offered by state representation and, alternately create radical exits that open pathways to self-determination and autonomy. In fact, a pervasive desire to flee visibility is casting a shadow across political, intellectual, and artistic spheres; acts of escape and opacity are everywhere today!"

At the level of user interaction, Woods and Sarter use the term 'observability': "The key to supporting human-machine communication and system awareness is a high level of system observability. Observability is the technical term that refers to the cognitive work needed to extract meaning from available data (Rasmussen, 1985). This term captures the fundamental relationship among data, observer and context of observation that is fundamental to effective feedback. Observability is distinct from data availability, which refers to the mere presence of data in some form in some location. Observability refers to processes involved in extracting useful information. It results from the interplay between a human user knowing when to look for what information at what point in time and a system that structures data to support attentional guidance.... A completely unobservable system is characterized by users in almost all cases asking a version of all three of the following questions: (1) What is the system doing? (2) Why is it doing that? (3) What is it going to do next? When designing joint cognitive systems, (1) is often addressed, as it is relatively easy to show the current state of a system. (2) is sometimes addressed, depending on how intent/targets are defined in the system, and (3) is rarely pursued as it is obviously quite difficult to predict what a complex joint system is going to do next, even if the automation is deterministic."

Gudela Grote's (2005) concept of 'Zone of No Control' is important: "Instead of lamenting the lack of human control over technology and of demanding over and over again that control be reinstated, the approach presented here assumes very explicitly that current and future technology contains more or less substantial zones of no control. Any system design should build on this assumption and develop concepts for handling the lack of control in a way that does not delegate the responsibility to the human operator, but holds system developers, the organizations operating the systems, and societal actors accountable. This could happen much more effectively if uncertainties were made transparent and the human operator were relieved of his or her stop-gap and backup function."

Friday, 5 June 2015

Giving automation a personality

Kathy Abbott wrote: "LESSON 8: Be cautious about referring to automated systems as another crewmember. We hear talk about “pilot’s associate,” “electronic copilots” and other such phrases. While automated systems are becoming increasingly capable, they are not humans. When we attribute human characteristics to automated systems, there is some risk of creating false expectations about strengths and limitations, and encouraging reliance that leads to operational vulnerabilities (see Lesson 1)."
The topic of personality for automation is one of four I have termed 'jokers' - issues where there is no 'right' design solution, and where the badness of the solution needs to be managed through life. (The others are risk compensation, automation bias, and moral buffering).
Jaron Lanier called the issue of anthropomorphism “the abortion question of the computer world”—a debate that forced people to take sides regarding “what a person is and is not.” In an article he said "The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.
In other words, if you go back to some of the thought experiments from philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up."
Matthias Scheutz notes "Humans are deeply affective beings that expect other human-like agents to be sensitive to and express their own affect. Hence, complex artificial agents that are not capable of affective communication will inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet, affective artificial agents with genuine affect will then themselves have the potential for suffering, which leads to the “Affect Dilemma” for artificial agents, and more generally, artificial systems." In addition to the Affect Dilemma, Scheutz notes Emotional Dependence: "emotional dependence on social robots is different from other human dependencies on technology (e.g., different both in kind and quality from depending on one’s cell phone, wrist watch, or PDA).... It is important in this context to note how little is required on the robotic side to cause people to form relationships with robots."
Clifford Nass has proposed the Computers-Are-Social-Actors (CASA) paradigm: "people’s responses to computers are fundamentally “social”—that is, people apply social rules, norms, and expectations core to interpersonal relationships when they interact with computers. In light of the CASA paradigm, identifying the conditions that foster or undermine trust in the context of interpersonal communication and relationships may help us better understand the trust dynamics in human-computer communication. This chapter discusses experimental studies grounded in the CASA paradigm that demonstrate how (1) perceived people-computer similarity in personality, (2) manifestation of caring behaviors in computers, and (3) consistency in human/non-human representations of computers affect the extent to which people perceive computers as trustworthy."
The philosopher Jürgen Habermas has proposed that action can be considered from a number of viewpoints. To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental action and strategic action. "Instrumental action is governed by technical rules based on empirical knowledge. In every case they imply empirical predictions about observable events, physical or social." Strategic action is part technical, part social: it refers to the decision-making procedure at the decision-theory level (e.g. the choice between maximin and maximax criteria) and needs supplementing by values and maxims. Communicative action "is governed by consensual norms, which define reciprocal expectations about behaviour and which must be understood and recognized by at least two acting subjects. Social norms are enforced by sanctions.... Violation of a rule has different consequences according to the type. Incompetent behaviour, which violates valid technical rules or strategies, is condemned per se to failure through lack of success; the 'punishment' is built, so to speak, into its rebuff by reality. Deviant behaviour, which violates consensual norms, provokes sanctions that are connected with the rules only externally, that is by convention. Learned rules of purposive-rational action supply us with skills, internalized norms with personality structures. Skills put us into a position to solve problems, motivations allow us to follow norms."

The figure below illustrates the different types of action in relation to a temperature limit in an aircraft jet engine, as knowledge processing moves from design information to the development of operating procedures to operation.

Physical behaviour (say blade root deflection as a function of temperature) constitutes instrumental action and may be gathered from a number of rigs and models.  The weighting to be given to the various sources of data, the error bands to be considered and the type of criteria to use constitute strategic action.  The decision by the design community to set a limit (above which warranty or disciplinary considerations might be applied) is communicative action.  The operator (currently) has some access to instrumental action, and has strategic and communicative actions that relate to operation rather than design. In terms of providing operator support, instrumental action can be treated computationally, strategic action can be addressed by decision support tools, but communicative action is not tractable.  The potential availability of all information is bound to challenge norms that do not align with purposive-rational action.  The need for specific operating limits to support particular circumstances will challenge the treatment of generalised strategic action.  The enhanced communication between designers and operators is likely to produce greater clarity in distinguishing what constitutes an appropriate physical limit for a particular circumstance, and what constitutes a violation.
Automating the decision making of the design community (say by 'big data') looks 'challenging' for all but instrumental action.
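The instrumental/strategic split can be sketched in a few lines of Python. All numbers and rig names below are invented. The rig data stands in for instrumental action - empirical predictions about observable events - while the choice between a conservative and an optimistic criterion for combining the sources is strategic action, loosely analogous to the maximin/maximax choice. The communicative step - the design community agreeing that exceeding the limit carries warranty or disciplinary consequences - has no counterpart in the code, which is the point:

```python
# Invented temperature-limit estimates from three hypothetical rigs,
# each with an error band: this is the instrumental layer (empirical data).
rig_estimates_degC = {
    "rig_A": (850, 15),   # (estimated safe limit, error band +/-)
    "rig_B": (870, 25),
    "rig_C": (860, 10),
}

# The strategic layer: which criterion combines the sources?
def conservative_limit(estimates):
    """Worst case: a limit that no rig's lower error bound falls below."""
    return min(est - err for est, err in estimates.values())

def optimistic_limit(estimates):
    """Best case: the highest limit any rig's upper error bound supports."""
    return max(est + err for est, err in estimates.values())

conservative = conservative_limit(rig_estimates_degC)
optimistic = optimistic_limit(rig_estimates_degC)
print(f"conservative: {conservative} degC, optimistic: {optimistic} degC")
```

Both functions are trivial to compute; deciding which one the organisation should stand behind - and what happens to someone who exceeds the published limit - is not computable at all.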
1. Users are going to assign human qualities to automation, whether the designers plan for it or not. Kathy Abbott's caution is futile. It is going to happen so far as the user is concerned.
2. It is probably better, therefore, to consider the automation's personality during design, to minimise the 'false expectations' that Kathy Abbott identifies.
3. Designing-in a personality isn't going to be easy. The 'smarter' the system, the harder (and the more important) it is, is my guess. Enjoy the current state of the art with a Dalek Relaxation Tape.

Friday, 13 September 2013

Getting a Windows 8 PC to be usable and useful

The infamous start page is less of a deal than reported. It is mostly full of junk that is easily removed, but the small tiles are probably still a bad way to access a substantial set of applications. The desktop is only a click away.  I plan to leave installing RetroUI until after Windows 8.1 has been assimilated. RetroUI looks like it might have the ability to turn the start page into something useful, and some other interesting possibilities.

The full-screen 'apps' are a complete disaster from a user point of view on first encounter. Fortunately there are free alternatives (please donate where you can) that are better than MS offerings, which I would use anyway.

Apologies for the lack of links in what follows, but things are a bit fraught here. DYOR and YMMV of course, but my starter pack looks something like:
CCleaner of course.
Foobar for music and VLC player for other media.  You might want to add MakeMKV or FreeRIP.
LibreOffice; the quirks of client templates mean that I will also need MS Office, and MS have done the dirty as regards running earlier versions, but IMHO the open office spreadsheets are much better than Excel. Notepad++ may have features that are worth having over Notepad - depends on your usage.
Blue Griffon for web page writing, including drafting blog posts such as this. Not sorted out .pdf applications yet.
Irfanview, Photofiltre, Inkscape, YEd for graphics.
Browsers of your choice; it is a real shame what has happened to Opera - Firefox seems to be the only capable browser around now. SRWare Iron is essentially Chrome with all the right privacy settings and is good for simple surfing.
You really need a file manager with Win8; I paid for Powerdesk Pro 9, but it wouldn't run; found FreeCommander - nearly as good, free, and it works [update: Powerdesk runs fine under Win8.1, and is significantly better than FreeCommander]. FreeFileSync for rapid and flexible synchronizing of folders; In my experience it sometimes leaves junk folders starting FFS around, which need checking for content before deleting. Copernic for desktop search; At the start of Longhorn, "Where's my stuff?" was Bill Gates' big challenge for the OS that became Vista/7/8 but nothing seems to have happened.
PhraseExpress for keyboard shortcuts/macros/spellchecking/quotes.

This post at Lifehacker is good on alternatives to what comes with the machine.

Making it feel like home (surprisingly important) meant importing the coffee bean .bmp to tile on the desktop - it didn't seem to be on the machine.

Update after initial use of Win8.1

The update wasn't an update. It was an App in the Store. Here on Solaris III that wasn't obvious. Bing searches on the MS website didn't help - I had to get Google to tell me. Apart from that, painless.
BUT they really want you to be assimilated. Transferring between your MS account and your local account always follows the path of maximum difficulty. For instance, there is no Freecell installed (and the old freecell.exe won't run - Update - solution here). Ah, there is an App; this means lots of going into your MS account and giving permission for it to access all sorts of things, then fighting your way back to a local account, where I don't think it works. Far better to download free freecell solitaire from CNET (apparently a better game anyway). Why is it "my documents" but "your account" anyway? Just one of many instances of muddled inconsistency that bureaucracies produce when they don't do user testing.

Win8.1 is definitely an improvement on Win8. Given the outcry against Win8, though, it is remarkable that there seems to have been no proper UX involvement, or even simple user testing, before releasing Win8.1.

The Start screen is quite nice, and visually better than the old Start menu. BUT when you download applications, be sure to check that the icon is 'pinned to start' and maybe also 'pinned to taskbar'; otherwise you will be rummaging through Program Files. The pinning dialogue does, of course, have some annoying inconsistencies. The tile grid would be a good way to lay out options if MS didn't constrain the layout so much. Start out by getting rid of as much junk as you can.

Even without using the Apps, you are forced to have some un-Appy moments interacting with Win8.1. When in the middle of some mindboggling interaction remember that Esc won't work but that the Windows key gets you to the Start page. RetroUI is less necessary than it would have been with Win8, but may be worth it - I am still considering getting it.

The Start page does not have a search box - you just type and it appears. Some numpty must have thought that was as cool as Cupertino. Ok once you know. BUT it seems to be useless. If you want to know how to fix annoying aspects of Win8.1, google it. So far for me, this has included:
  • Restoring 'confirm' before delete. (hint: wastebasket properties).
  • Getting rid of the obtrusive 'help', which is even worse than Clippy was - at least Clippy didn't take up a quarter of the screen.
  • Moving between MS and local accounts, staying away from Skydrive, getting out of the MS account once forced to be in it.
  • Finding a workaround for the loss of Start - documents; made a desktop shortcut to 'Recent Items'. The MS website proposal didn't match the Win8.1 UI.

"What now? - Oh, that what now..."

Some of the revisionist capitalist-running-dog press with "leaks" of an update to Win8.1 are trying to airbrush what a disaster the Win8/8.1 UI is. I trust they were paid in silver rather than lunch. A group of schoolchildren with a UX project would not try to impose a phone touchscreen interface on a desktop monitor. To be that crass, you need a roomful of balding shouty predatory Silicon Valley business leaders. The penny is starting to drop in terms of updates to unwind this folly.
 Couple this with @tomiahonen's forecast that Nokia/MS/Windows phones are doomed, poor sales of Windows tablets to business, and we need to look elsewhere. The ending of support for XP and Win7 must be alarming a good many organizations. The move to open source formats in the public sector comes just at the wrong time for MS, and a free alternative to MS Office is very appealing in a time of austerity.
Apple and I parted company a long time ago. IMHO Apple without Steve Jobs is on its way to becoming as loved as Adobe (happy to be proved wrong). For business use, Android is a mess. So by default my next tech project is to try Linux - probably LXLE on an old machine. I just don't see the alternative.

Sunday, 7 July 2013

A Human-Centred view of Science

'Purity' is one of my favourite XKCD comics.

It summarizes a particular view of science - that of 'Single vision and Newton's Sleep' from William Blake.

Taking a more human-centred view of science is in line with Alexander Pope's "The proper study of mankind is man", or Protagoras' statement that "Man is the measure of all things". Such a view gives us something more like this.

"I maintain that the human mystery is incredibly demeaned by scientific reductionism, with its claim in promissory materialism to account eventually for all of the spiritual world in terms of patterns of neuronal activity. This belief must be classed as a superstition. . . . we have to recognize that we are spiritual beings with souls existing in a spiritual world as well as material beings with bodies and brains existing in a material world."
Sir John Eccles --Evolution of the Brain, Creation of the Self, p. 241

Tuesday, 25 June 2013

Automation and context of use

The Heyns report on Lethal Autonomous Robotics makes a distinction between autonomous and automatic.

“Autonomous” needs to be distinguished from “automatic” or “automated.” Automatic systems, such as household appliances, operate within a structured and predictable environment. Autonomous systems can function in an open environment, under unstructured and dynamic circumstances. As such their actions (like those of humans) may ultimately be unpredictable, especially in situations as chaotic as armed conflict, and even more so when they interact with other autonomous systems.

Good point to make. The concept of 'predictability' for autonomous systems has problems when the environment introduces complexity - even for a simple device.

Here is a simple example of simple automation that did not capture the context of use.

Manual tap and soap dispenser.

Automatic tap (sensor under outlet turns on water flow).

See the problem? You reach across for the soap and soak your sleeve.

So - Human Centred Design is necessary even for the simplest of automatic systems. Context is just about everything.
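The tap's failure fits in a couple of lines of code (invented logic, obviously - the real device is analogue electronics). The design model equates 'object under the outlet' with 'hands being washed', so the context that distinguishes washing from reaching for the soap is simply not represented anywhere the system can act on it:

```python
# A toy model of the automatic tap: the sensor fires on ANY object in
# range, because "object present" was assumed to mean "hands being washed".
# (Entirely invented logic, for illustration of the missing context.)

def water_on(event):
    # Note: the 'intent' field exists in the world, but the design
    # model never looks at it - context of use is invisible to the tap.
    return event["object_in_range"]

events = [
    {"object_in_range": True, "intent": "wash hands"},          # intended use
    {"object_in_range": True, "intent": "reach for the soap"},  # soaked sleeve
]
print([water_on(e) for e in events])
```

Both events produce exactly the same sensor reading, so the tap cannot behave differently however cleverly its trigger logic is tuned - the fix has to come from the design of the whole sink, not the code.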