Tuesday 21 September 2021

Engineers and Human Values

"If it weren't for the people, the god-damn people," said Finnerty, "always getting tangled up in the machinery. If it weren't for them, the world would be an engineer's paradise." - Kurt Vonnegut, Player Piano

"Nice thread, but thinking of AI as “user-centered” is a narrow view. Shouldn’t the real goal of AI be to create truly autonomous intelligent beings rather than servants for human purposes? We’re just building smarter screwdrivers today." Ali Minai @barbarikon

The failure of engineers to understand user and stakeholder needs and values is an old problem. From Plato's "The Republic"
[suggestion - read painter / imitator as marketing]:

“Will a painter, say, paint reins and bridle?” “But a saddler and a smith will make them?” “Certainly.”
“Does the painter know what the reins and the bridle ought to be like? Or is it the case that not even the smith and the saddler, who made them, know that, but only the horseman, the man who knows how to use them?” “Very true.”
“And shall we not say the same about everything?” “What?”
“That there are three arts concerned with each thing — one that uses, one that makes, and one that imitates?”
“Then are the virtue and beauty and correctness of every manufactured article and living creature and action determined by any other consideration than the use for which each is designed by art or nature?”
“Then it is quite inevitable that the user of each thing should have most experience of it, and should be the person to inform the maker what are the good and bad points of the instrument as he uses it. For example, the flute-player informs the flute-maker about the flutes which are to serve him in his fluting; he will prescribe how they ought to be made, and the maker will serve him.” “Surely.”
“Then he who knows gives information about good and bad flutes, and the other will make them, relying on his statements?” “Yes.”
“Then the maker of any article will have a right belief concerning its beauty or badness, which he derives from his association with the knower, and from listening, as he is compelled to do, to what the knower says; but the user has knowledge?” “Certainly.”

This post is partly in response to a well-considered article on the need for engineers to understand human values and adopt systems thinking, here. My concern with the article is that its aspirations are doomed.

 Update: To an extent, it could be considered a diagnosis of the 'Engineer's Disease' discussed here, with differing versions here and here. I was alerted to the disease by Paul Graham Raven with this post.

I have had the pleasure and privilege to work with folk from many different backgrounds who have practiced Human Centred Design (HCD) well. Engineers who 'get' human values and HCD can be powerful forces for good. However, they are the exception that proves the rule. Building artefacts that reflect human values needs multi-disciplinary teamwork if the process is to deliver dependably. The idea that engineers can embrace the consideration of human values as a result of a training course is a doomed hope. This post presents some of the ways in which engineers frequently and persistently fail to consider human values. The logic is that any one of these ways can be sufficient to prevent a system from reflecting human values.

Autogamous technology

Gene I. Rochlin defined autogamous technology as self-pollinating and self-fertilizing, responding more and more to an inner logic of development than to the needs and desires of the user community. The term has not found widespread use. However, the existence of such technology is widespread, perhaps characterized by the Internet Fridge and the Internet of Shit. Is it realistic to expect engineers to be able to answer 'Question Zero' here? Quite probably not. If not engineers, then who?

Nigel Bevan persuaded the software standards community that the purpose of quality during design was to achieve Quality In Use (QIU).  Why else would anyone build a system? This post does not get to the bottom of that question but provides some pointers as to why building a system that does not reflect human needs and values is routine.

Monastic seclusion

The archetypal approach to engineering is for one or more engineers to work in a lab or garage to bring their creation to life. This is a secluded environment, free of distractions. The Human Centred Design approach is very out-and-about and social, listening to users and stakeholders, trying things out, and working in a multi-disciplinary team (see below). Many engineers (and egotistical industrial designers) treat such an approach with contempt and see it as interfering with real work.

Principles of Human-Centred Design ISO 9241-210:2019
5.2 The design is based upon an explicit understanding of users, tasks and environments
5.3 Users are involved throughout design and development
5.4 The design is driven and refined by user-centred evaluation
5.5 The process is iterative
5.6 The design addresses the whole user experience
5.7 The design team includes multidisciplinary skills and perspectives

In 'What Engineers Know and How They Know It', Walter Vincenti says "artifactual design is a social activity." Chapter 3 of the book gives an account of how flying qualities were re-conceptualized over a ten-year period of engineers and pilots working very closely as a team.

In some situations, it is possible for engineers to relate to the user and context of use directly. For example, Toyota engineers:

'As Kousuke Shiramizu, Lexus quality guru and executive vice president, explains, “Engineers who have never set foot in Beverly Hills have no business designing a Lexus. Nor has anybody who has never experienced driving on the Autobahn firsthand.”'

"The story concerns a chief engineer who moved in with a young target family in southern California to enhance his understanding of the generation X lifestyle associated with RAV Four customers. While developing Toyota’s successful 2003 Sienna, the Sienna CE drove his team in Toyota’s previous minivan model more than 50,000 miles across North America through every part of Canada, the United States, and Mexico. The CE experienced a visceral lesson in what is important to the North American minivan driver and discovered in every locale new opportunities for improving the current product. As a result, the Sienna was made big enough to hold full sheets of plywood while the turning radius was tightened, more cupholders were added, and cross-wind stability was enhanced, among many other improvements that resulted from this experience."

Both of the above are from 'The Toyota Product Development System' by James M. Morgan and Jeffrey K. Liker.

In other situations, the impact of a proposed system on various groups and their context of use may not be intelligible or accessible directly, and a plan of work is required, possibly including the use of specialists such as ergonomists or anthropologists.

Engineering values and humanity

Nicholas Carr hits the nail on the head about the values implicit in automation here. "Google’s Android guru, Sundar Pichai, provides a peek into the company’s conception of our automated future:
“Today, computing mainly automates things for you, but when we connect all these things, you can truly start assisting people in a more meaningful way,” Mr. Pichai said. He suggested a way for Android on people’s smartphones to interact with Android in their cars. “If I go and pick up my kids, it would be good for my car to be aware that my kids have entered the car and change the music to something that’s appropriate for them,” Mr. Pichai said.

What’s illuminating is not the triviality of Pichai’s scenario — that billions of dollars might be invested in developing a system that senses when your kids get in your car and then seamlessly cues up “Baby Beluga” — but what the urge to automate small, human interactions reveals about Pichai and his colleagues. With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be “freed up” to do something “more valuable.” Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.

I guess it’s no surprise that what Pichai expresses is a robot’s view of technology in general and automation in particular — mindless, witless, joyless; obsessed with productivity, oblivious to life’s everyday textures and pleasures. But it is telling. What should be automated is not what can be automated but what should be automated." [emphasis added]

Abeba Birhane et al. have ascertained the values implicit in ML here:

"We reject the vague conceptualization of the discipline of ML as value-neutral. Instead, we investigate the ways that the discipline of ML is inherently value-laden. Our analysis of highly influential papers in the discipline finds that they not only favor the needs of research communities and large firms over broader social needs, but also that they take this favoritism for granted. The favoritism manifests in the choice of projects, the lack of consideration of potential negative impacts, and the prioritization and operationalization of values such as performance, generalization, efficiency, and novelty. These values are operationalized in ways that disfavor societal needs, usually without discussion or acknowledgment. Moreover, we uncover an overwhelming and increasing presence of big tech and elite universities in highly cited papers, which is consistent with a system of powercentralizing value-commitments. The upshot is that the discipline of ML is not value-neutral. We find that it is socially and politically loaded, frequently neglecting societal needs and harms, while prioritizing and promoting the concentration of power in the hands of already powerful actors."

User information needs

Bainbridge's Ironies of Automation, here, are still unresolved, and the problems of supervisory control are frequently unaddressed. Donald Michie wrote about the need for a 'human window' into AI systems in the 1980s. Forty years later, the ML community sees even 'syntactic sugar' (Michie) as an optional research topic. In a sense, this is a continuation of the failure-prone 'strong, silent automation' (Woods). Briefly put, engineers left to themselves will continue to ignore user information needs.

Belletristic vs. practical approach to work

Look around design offices or software development offices and examine the books: manuals, catalogues, standards. For all practical purposes, you will not find an anthropology journal. Researching human values, societal impact, etc. is the bookish sort of activity that design engineers don't do. Engineers also tend to ask how, not why.

Stack fallacy

Stack fallacy - here -  is the mistaken belief that it is trivial to build the layer above yours. The Socio-Technical System that an engineered artefact enters may be several layers above the competence of the engineers involved.

"The bottleneck for success often is not knowledge of the tools, but lack of understanding of the customer needs. Database engineers know almost nothing about what supply chain software customers want or need. They can hire for that, but it is not a core competency."

Prometheanism

In 'Technics and Time', Bernard Stiegler says that "as a 'process of exteriorization,' technics is the pursuit of life by means other than life".

Adrienne Mayor (here) has shown that the quest to build 'life through craft' - biotechne - goes back at least as far as Classical times, with Talos.

This post is a first step over some deep waters. Relevant writers include Romanyshyn, Yuk Hui, Dryzek, etc., but the drive to create a machine that is monstrous and then to abdicate responsibility for it (Facebook, Amazon, and others) indicates a deeply-held darkness in our psyche and culture.

Transcendence

David Noble has studied the ways in which religion (forms of Christianity) and technology are intertwined, and examined the religious motivation behind the development of technology.

"When people wonder why the new technologies so rarely seem adequately to meet their human and social needs, they assume it is because of the greed and lust for power that motivate those who design and deploy them. Certainly, this has much to do with it. But it is not
the whole of the story. On a deeper cultural level, these technologies have not met basic human needs because, at bottom, they have never really been about meeting them. They have been aimed rather at the loftier goal of transcending such mortal concerns altogether. In such
an ideological context, inspired more by prophets than by profits, the needs neither of mortals nor of the earth they inhabit are of any enduring consequence. And it is here that the religion of technology can rightly be considered a menace. (Lynn White, for example, long ago identified the ideological roots of the ecological crisis in "the Christian dogma of man's transcendence of, and rightful mastery over, nature"; more recently, the ecologist Philip Regal has likewise traced current justifications of unregulated bioengineering to their source in late-medieval natural theology
.)" (The Religion of Technology, p206- 207)

Featuritis as a substitute for understanding use

"Creativity is not a process...It’s people who care enough to keep thinking about something until they find the simplest way to do it." Tim Cook

“Making the simple complicated is commonplace; making the complicated simple, awesomely simple, that's creativity.” — Charles Mingus.

 "A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." - Antoine de Saint-Exupery

The obstacles to achieving simplicity are as follows:

  1. Lots of engineers do not care enough about user needs or societal impact to keep thinking about them (cf. Tim Cook).
  2. Lots of engineers want the machine they are bringing to life to be as advanced and complicated as possible.
  3. Finding the simplicity that gives a Happy User Peak (Kathy Sierra, below) means getting out of the lab and listening to people.
  4. Adding in loads of features means there is bound to be something for everybody (if they can find it).
  5. More features means you are on the job for longer.

"Creeping featurism ... is the tendency to add to the number of functions that a device can perform, often extending the number beyond all reason." Don Norman. The alternative to simplicity is typified by featuritis - here, here and here (Kathy Sierra). Thomas Landauer wrote 'The Trouble with Computers' in 1995 - here - but the culture has not changed much since.

Systemic vs. systematic thinking

Many engineers are happy doing systematic thinking in the complicated domain (Cynefin), and are unhappy coping with emergence, thinking systemically, and working in the complex domain. Notes on the difference here, here and here. Acting to meet human values requires systemic thinking, and many engineers are never going to be up to that. It seems that engineers who don't 'get' complexity are not amenable to change via a short course (or perhaps even lived experience).


[Found on Twitter]

