Thursday 27 January 2022

Tools and teams as interaction metaphors

Kranzberg's First Law (here): “Technology is neither good nor bad; nor is it neutral.” By which he means that: “technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

On Twitter, Ben Shneiderman suggested that folk build tools rather than human-AI teams. My gut instinct was that he was right, but the topic seemed worthy of a quick look. 

A tool may not be human-centred: does the tool become an extension of the person (like a violin), or does the person become an extension of the machine (as in a factory)? A team approach could be great, or could be 'son of Clippy'. Score: neutral on the choice; one to good design practice.

Both tool and team approaches need to monitor the 'affect dilemma' in operation (see Jokers). Another one to good design practice.

'Trust' is an output variable and needs to be understood and measured, but finding out how to build human-AI teams needs work on the inputs. 'Trusted' and 'trustworthy' are separate components of trust. It is worth noting that trust is also an issue for tools.

Merriam-Webster notes: "...the words associated with trusted mostly refer to people, while those most associated with trusty refer to animals, equipment, and tools in addition to people. We therefore say “trusty Swiss Army knife” but never “trusted Swiss Army knife”; its utility and dependability are inherent, not sought, developed, or earned. This distinction is relatively recent; it seems to have settled into its current usage by the 1940s. Shakespeare had used trusty for both meanings (“trusty servant” and “trusty sword” occur in his works), and both Dickens and Conan Doyle used trusty to describe people rather than animals or things. Emily Dickinson used the word to mean something closer to trustworthy or dependable…" Building trustworthy digital team members is non-trivial, as Alexa users found out here. Similar contextual limits apply to tools, e.g. cockpit automation.

Are 'recommender systems' tools or team members? Maybe there are other categories we need? Most recommenders - and most automation - are 'strong silent automation' (Woods), with all the attendant problems.
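
To make 'strong silent' concrete, here is a minimal sketch - names, scoring, and the interface are all invented, not taken from any real recommender library - of the difference between automation that just answers and automation that exposes enough of its reasoning to support coordination:

```python
from dataclasses import dataclass

# Illustrative only: a hypothetical recommender. 'Strong silent' automation
# returns a bare answer; a team-player variant also exposes the rationale,
# confidence, and rejected alternatives a human would need to coordinate.

@dataclass
class Recommendation:
    choice: str
    rationale: str      # why this option was preferred
    confidence: float   # margin over the runner-up, as a crude proxy
    alternatives: list  # options considered and not chosen

def silent_recommend(options: list) -> str:
    """Strong silent style: the answer, and nothing else."""
    return max(options, key=lambda o: o["score"])["name"]

def observable_recommend(options: list) -> Recommendation:
    """Team-player style: the answer plus the material for dialogue."""
    ranked = sorted(options, key=lambda o: o["score"], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    return Recommendation(
        choice=best["name"],
        rationale=f"scored {best['score']:.2f} vs {runner_up['score']:.2f} for {runner_up['name']}",
        confidence=best["score"] - runner_up["score"],
        alternatives=[o["name"] for o in ranked[1:]],
    )

options = [
    {"name": "A", "score": 0.72},
    {"name": "B", "score": 0.65},
    {"name": "C", "score": 0.40},
]
print(silent_recommend(options))      # 'A', take it or leave it
print(observable_recommend(options))  # 'A', plus why and how sure
```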

People have been making tools for a very long time, so we must have some idea of how to go about it. Building AI team members has yet to really happen. So, pragmatically, score one to tools. Back in the late 1980s, there was much research activity into Human-Electronic Crew Teamwork (Pilot's Associate etc.), which never materialised in production. At that time, Jack Shelnutt (so far as I could tell) did careful task tailoring to build tools that looked like they worked. How to design dialogue seems to be an art form that has come and gone, e.g. here. Probably another one to tools.

I suspect the idea of automation as a team player arose as a counter to strong silent automation (e.g. here and here), to enable coordination between human and automated actions and perceptions of the world. It is not obvious how a tool metaphor could do this. Score one to teams.

“The world can only be grasped by action, not by contemplation. The hand is the cutting edge of the mind.” Jacob Bronowski. Tools traditionally provide feedback through the control side of the loop. This is under-explored, e.g. the H-metaphor here.
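
The H-metaphor can be sketched as shared control through the inceptor itself: the automation's intent is felt as a force on the stick rather than announced on a display. A toy illustration, with made-up gains and signals:

```python
# Illustrative sketch of feedback through the control side of the loop
# (loosely after the H-metaphor). All gains and signals are invented.
# The automation does not announce its intent on a display; the human
# feels it as a force on the stick, and 'rein tension' sets how strongly
# the automation pulls toward its own intent.

def stick_force(human_input: float, automation_intent: float, rein: float) -> float:
    """Force fed back through the stick, proportional to disagreement.

    rein: 0.0 = loose rein (automation barely felt),
          1.0 = tight rein (automation pulls hard toward its intent).
    """
    disagreement = automation_intent - human_input
    return rein * disagreement

def blended_command(human_input: float, automation_intent: float, rein: float) -> float:
    """Actual command sent to the vehicle: a blend weighted by rein tension."""
    return (1.0 - rein) * human_input + rein * automation_intent

# Example: human steers 0.8 right, automation wants 0.2 right.
for rein in (0.1, 0.5, 0.9):
    print(rein, stick_force(0.8, 0.2, rein), blended_command(0.8, 0.2, rein))
```

Loosening the rein shifts authority to the human; tightening it shifts authority to the automation - the negotiation happens in the control loop itself, not in a dialogue box.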

Strong silent automation continues under the guise of autonomy - a very strange design intent (e.g. here); to a large extent, any metaphor should counter this. Goodrich, on Human-Robot Interaction here: "One operational characterization of autonomy that applies to mobile robots is the amount of time that a robot can be neglected, or the neglect tolerance of the robot [68]. A system with a high level of autonomy is one that can be neglected for a long period of time without interaction. However, this notion of autonomy does not encompass Turing-type notions of intelligence that might be more applicable to representational or speech-act aspects of autonomy. Autonomy is not an end in itself in the field of HRI, but rather a means to supporting productive interaction. Indeed, autonomy is only useful insofar as it supports beneficial interaction between a human and a robot." Autonomous cars need to interact, e.g. with pedestrians here. Are they tools or teams?
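
Goodrich's operational characterization lends itself to a toy measure. A sketch, assuming an invented exponential performance-decay model, of how long a robot can be neglected before performance drops below an acceptable floor:

```python
import math

# Toy illustration of neglect tolerance (after Goodrich's operational
# characterization): how long can the robot be left alone before task
# performance falls below an acceptable floor? The exponential decay
# model and all the numbers are invented for illustration.

def performance(t_neglected: float, decay_rate: float) -> float:
    """Task performance: 1.0 right after interaction, decaying with neglect."""
    return math.exp(-decay_rate * t_neglected)

def neglect_tolerance(decay_rate: float, floor: float) -> float:
    """Time (s) of neglect before performance drops below the acceptable floor."""
    return -math.log(floor) / decay_rate

# A 'more autonomous' robot decays more slowly, so it tolerates longer neglect.
print(performance(10.0, decay_rate=0.05))             # ~0.61 after 10 s alone
print(neglect_tolerance(decay_rate=0.05, floor=0.6))  # ~10.2 s
print(neglect_tolerance(decay_rate=0.01, floor=0.6))  # ~51.1 s
```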

An aspect of the context of use to consider when choosing a metaphor is dynamic value alignment. There was a discussion in the service design community on 'co-creation': could an airline booking system detect that the user was booking a holiday rather than a business trip, and automatically adjust trade-offs such as speed vs. cost? In a military situation, values may change rapidly, and 'teamwork' is about recognising this and responding quickly. Value alignment is hard. Automatic dynamic value alignment is a real challenge to automation. If this can be done, then real teamwork is a possibility.
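
The booking example can be sketched in a few lines. Everything here is invented (the context heuristic, the weights, the flight data); the point is only that the speed vs. cost trade-off is re-weighted at run time as the inferred context changes:

```python
# Illustrative sketch of dynamic value alignment for the booking example.
# The context signals, weights, and flights are all made up; the mechanism
# shown is just context inference driving a re-weighted trade-off.

def infer_context(booking: dict) -> str:
    """Crude, invented heuristic: weekend dates plus companions suggest a holiday."""
    if booking.get("includes_weekend") and booking.get("passengers", 1) > 1:
        return "holiday"
    return "business"

WEIGHTS = {
    "business": {"speed": 0.8, "cost": 0.2},  # time matters most
    "holiday": {"speed": 0.2, "cost": 0.8},   # price matters most
}

def score(flight: dict, weights: dict) -> float:
    # Both attributes normalised to 0..1, higher is better.
    return weights["speed"] * flight["speed"] + weights["cost"] * flight["cheapness"]

def recommend(flights: list, booking: dict) -> dict:
    weights = WEIGHTS[infer_context(booking)]
    return max(flights, key=lambda f: score(f, weights))

flights = [
    {"name": "fast-expensive", "speed": 0.9, "cheapness": 0.2},
    {"name": "slow-cheap", "speed": 0.3, "cheapness": 0.9},
]
print(recommend(flights, {"includes_weekend": True, "passengers": 2})["name"])   # slow-cheap
print(recommend(flights, {"includes_weekend": False, "passengers": 1})["name"])  # fast-expensive
```

Real dynamic value alignment would, of course, have to infer and revise the weights themselves, in flight, which is exactly the hard part.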

In conclusion, if a trusty tool metaphor looks like it will work, that sounds good, unless it introduces strong silent automation. Assistant-type dialogue is still difficult, and real teams are still a research project. Good human-centred design practice is needed whatever the choice.