There is much talk about a human-centred approach to AI, and about using AI to provide Intelligence Augmentation. For example, this talk contrasts AI as a magic beanstalk with AI as a tool for human use.
This post examines the practical likelihood of achieving such aims at any scale, and reviews the forces opposing the adoption of a human-centred approach to automation. Scott Berkun has done us a favour by writing an excellent example of a plea for human-centredness in design - just the sort of thing that has been ignored for decades: "We need to shift how we measure progress away from the potential in a technology and toward what people are actually able to achieve with it.... Everyone, from consumers to programmers to business leaders, must become more educated about what good design really means. For consumers, this isn’t necessarily to become designers themselves, but to become better judges of the true value of things before they buy them. Technologists and businesspeople need to understand the common traps that lead to bad design and do what they can to reduce them. This is often as simple as valuing design experts enough to listen to them at the start of projects when the important decisions are made, rather than at the end when their advice will be far too late."
This post makes the safe assumption that Scott will be ignored, and attempts to probe the how and why of that.
In the beginning...
Nehemiah Jordan worked for RAND Corporation on the SAGE air defence system. In a series of articles in Psych. Rev. (1963), he outlined most of the human problems with introducing automation. He wrote up the lessons learned in the classic book 'Themes in Speculative Psychology' (1968). Two quotes are relevant here - on motivation, and on Allocation of Function (to human or machine).
"In designing a complex man-machine system one should consider the human performance necessary for the system, not only from an instrumental standpoint, but also from a consummatory standpoint, that is: how satisfying the job is per se. For jobs to be satisfying three conditions seem to be necessary and sufficient: they must demand of the operator the utilization of skills; they must be meaningful; and the operator must have real responsibility. It was also asserted that although human factors engineering neglected the consummatory standpoint, as long as machines were relatively crude, this neglect was not critical. With the mushrooming development of automation, however, we cannot afford this luxury any more. In designing and thinking about our new complex automated man-machine systems we must take the consummatory standpoint into account; we must learn to design for men jobs that are intrinsically interesting and satisfying."
Allocation of Function
"In other words, to the extent that man becomes comparable to a machine we do not really need him any more since he can be replaced by a machine. This necessary consequence was actually reached, but not recognized, in a later paper, also a fundamental and significant paper in human factor engineering literature. In 1954 Birmingham and Taylor in their paper: ‘A Design Philosophy for Man-Machine Control Systems’, write:‘... speaking mathematically, he (man) is best when doing least’ [1, p. 1752]. The conclusion is inescapable - design the man out of the system. If he does best when he does least, the least he can do is zero. But then the conclusion is also ridiculous....
I suggest that ‘complementary’ is probably the correct concept to use in discussing the allocation of tasks to men and to machines. Rather than compare men and machines as to which is better for getting a task done, let us think about how we complement men by machines, and vice versa, to get a task done.
As soon as we start to think this way, we find that we have to start thinking differently. The term ‘allocation of tasks to men and machines’ becomes meaningless. Rather we are forced to think about a task that can be done by men and machines. The concept ‘task’ ceases to be the smallest unit of analysis for designing man-machine systems, though still remaining the basic unit in terms of which the analysis makes sense. The task now consists of actions, or better still activities, which have to be shared by men and machines. There is nothing strange about this. In industrial chemistry the molecule is the fundamental unit for many purposes and it doesn’t disturb anybody that some of these molecules consist of hundreds, if not thousands, of atoms. The analysis of man-machine systems should therefore consist of specifications of tasks and activities necessary to accomplish the tasks. Man and machine should complement each other in getting these activities done in order to accomplish the task.
It is possible that with a shift to emphasizing man-machine complementarity, new formats for system analysis and design will have to be developed, and these formats may pose a problem. I am convinced, however, that as soon as we begin thinking in proper units, this problem will be solved with relative ease. Regardless of whether this is so, one can now already specify several general principles that may serve as basic guidelines for complementing men and machines."
John Allspaw has a thread on Fitts List and the un-Fitts List here.
From the outset, we knew that the design of automation should follow from the design of jobs. Simplistically, a Plan-Do-Check-Act (PDCA) cycle for job and organization design drives a PDCA cycle for automation. We also knew not to do 'job design by left-overs', i.e. automate that which is easy to automate, and leave people to do the rest.
As you will be aware, this is not what has happened.
Why is human-centred automation so rare compared to human replacement automation?
Chris Boorman (@CHBoorman) - in a long-gone blog post - contrasted cost-reduction human replacement automation with human-centred automation: "Automation is an essential capability for enterprises seeking to innovate – whether through internal channels, acquisition or partnership. Gartner has previously stated that for many organizations 80% of time can be spent on day-to-day processes, or 'keeping the lights on', and this is not sustainable if they are to continue to win market share and grow in increasingly competitive markets.
Automation enables enterprises to automate those core processes not to make cuts, but to free up resource to work on new disruptive projects. Faced with an increasingly complex world of technology - cloud, mobile, big data, internet of things - as well as growing consumer expectations, every business needs to turn to automation or perish.
Automation needs to be ingrained in an organization’s DNA early on and not deployed later as a replacement measure for existing job functions. It should instead be used to allow people and resources to be more focused on driving the business forwards, rather than on just keeping the lights on.
Every industry is going through a period of change as new technologies and new entrants look to disrupt the status-quo. Automation is a key enabler for helping enterprises to disrupt their own industries and drive that change. Acquiring new customers, retaining customers, driving business analytics, consolidating enterprises following mergers or driving agility and speed are all critical business imperatives. Automation delivers the efficiency and enables the new way of thinking from your brightest talent to succeed."
Prefix Capitalism has devised the worst of both worlds with pre-automation: "We define pre-automation as the coincident, strategic effort to scale a workforce and monopolize a distribution network via platform while simultaneously investing in its automated replacement."
Frank Pasquale puts it this way: "All too often, the automation literature is focused on replacing humans, rather than respecting their hopes, duties, and aspirations. A central task of educators, managers, and business leaders should be finding ways to complement a workforce’s existing skills, rather than sweeping that workforce aside. That does not simply mean creating workers with skill sets that better “plug into” the needs of machines, but also, doing the opposite: creating machines that better enhance and respect the abilities and needs of workers. That would be a “machine age” welcoming for all, rather than one calibrated to reflect and extend the power of machine owners."
Well-run organizations with a human-centred approach, e.g. using Henry Stewart's Happy Manifesto or ISO 27500:2016, would have no great problem with human-centred automation. Similarly, proper Lean organizations such as Toyota. However, such organizations are rare and against the grain. Theory Y is rare compared to Theory X in practice. Bullshit Jobs (Graeber) are everywhere, and organizations seem to have adopted The Gervais Principle (Rao) as a manual. In developing ISO TS 18152 we found that linking job design and automation took a ton of activities at all levels of management, and at all stages of the lifecycle. Current organizations and project structures really do not do human-centredness unless forced to.
Hostile business models have more or less stopped any chance of positive User Experience (UX), as noted by Mark Hurst here. Prefix Capitalism (Tante) is propagating Chickenized Reverse Centaurs (Cory Doctorow, https://pluralistic.net/2021/03/19/the-shakedown/#weird-flex), shitty automation, and the surveillance panopticon, with added ethicswashing. A human-centred approach to the financialised world would include the challenging task of supporting 'investee activism' (Feher) and 'arts of doing' (De Certeau).
Globalization and expansion to society level
Automation has extended to a global level, interacting with society as a whole (e.g. Facebook algorithms, where user issues include privacy and identity - a long way from issues of numbers of mouse clicks). This is being addressed as a battle of words between The Lords of the Valley and elected politicians. Going swimmingly. The European Union seems to be the regulator for Silicon Valley, but the focus is on software and data. The reaction by Google and others to the proposed EC AI Regulation more or less demonstrates its necessity. The EC proposed Regulation addresses important risks, but does not attempt to meet the stated aim of being human-centric.
Niels Bjorn-Andersen (1985) raised the question of “whether all our (the HF community) intellectual capacity, energy and other precious resources are being utilized to:
- Soften the technology to make it more compatible with human beings (through removing the flicker in order not to damage the eyes, detaching the keyboard in order not to damage the back of the operator, making it so easy to use that “even a child or a mentally retarded person can use it” etc.) and in this way provide a sugar coating on the pill so that it may be swallowed more easily, or whether
- we are genuinely contributing to the attainment of true human values.”
(Bjorn-Andersen, N. ‘Are “Human Factors” human?’, Contribution to Man Machine Integration, State of the Art Report, Pergamon Infotec, Jan 1985.)
In contrast, the Principles of Human Centred Design (ISO 9241-210:2019) are:
- The design is based upon an explicit understanding of users, tasks and environments
- Users are involved throughout design and development
- The design is driven and refined by user-centred evaluation
- The process is iterative
- The design addresses the whole user experience
- The design team includes multidisciplinary skills and perspectives
At a society level, the analysis of a potential 'robot takeover' is being done in a top-down manner by *economists* using a watered-down version of Fitts List and Human Replacement Automation. What could possibly go wrong? (A succinct, thoughtful analysis of jobs and automation is provided by Benanav).
The relationship between people and nature has lost much in the change from 'indigenous' to 'urban'. This piece uses 'human-centred' in a valid accusatory manner. The defence of human-centredness would be to say that the design intent of the suburban life being criticised is 'less-than-human centred', and that the relationship with nature is itself a part of human-centredness. However, it would be hard to find examples in practice so labelled; the defence rests on a possible future human-centredness rather than on anything current.
State of human-centredness and AI / ML
Some sectors have taken a human-centred approach to AI/ML in their sector:
Autonomous Urbanism and NACTO "The cautious optimism that characterized the first edition of the Blueprint for Autonomous Urbanism, published in 2017, has been tempered by recognition of the enormity of the policy foundation that must be laid for us to reach a human-focused autonomous future. Like the first Blueprint, this edition lays out a vision for how autonomous vehicles, and technology more broadly, can work in service of safe, sustainable, equitable, vibrant cities. This vision builds on and reinforces the past decade of transformative city transportation practice. It prioritizes people walking, biking, rolling, and taking transit, putting people at the center of urban life and street design, while taking advantage of new technologies in order to reduce carbon emissions, decrease traffic fatalities, and increase economic opportunities....Automation without a comprehensive overhaul of how our streets are designed, allocated, and shared will not result in substantive safety, sustainability, or equity gains. By implementing proactive policies today, cities can act to ensure that the adoption of AV technologies improves transportation outcomes rather than leading to an overall increase in driving."
The American Medical Association has a policy: "Our AMA advocates that:
- AI is designed to enhance human intelligence and the patient-physician relationship rather than replace it
- Oversight and regulation of health care AI systems must be based on risk of harm and benefit accounting for a host of factors, including but not limited to: intended and reasonably expected use(s); evidence of safety, efficacy and equity, including addressing bias; AI system methods; level of automation; transparency; and conditions of deployment
- Payment and coverage for all health care AI systems must be conditioned on complying with all appropriate federal and state laws and regulations, including but not limited to those governing patient safety, efficacy, equity, truthful claims, privacy and security, as well as state medical practice and licensure laws
- Payment and coverage for health care AI systems intended for clinical care must be conditioned on:
  - Clinical validation
  - Alignment with clinical decision-making that is familiar to physicians
  - High-quality clinical evidence
- Payment and coverage for health care AI systems must:
  - Be informed by real-world workflow and human-centered design principles
  - Enable physicians to prepare for and transition to new care delivery models
  - Support effective communication and engagement between patients, physicians and the health care team
  - Seamlessly integrate clinical, administrative and population health management functions into workflow
  - Seek end-user feedback to support iterative product improvement
- Payment and coverage policies must advance affordability and access to AI systems that are designed for small physician practices and patients and not limited to large practices and institutions
- Government-conferred exclusivities and intellectual property laws are meant to foster innovation, but constitute interventions into the free market, and therefore should be appropriately balanced with the need for competition, access and affordability."
While welcome, the state of such initiatives is orders of magnitude less than what is needed - even within healthcare AI. The state of ML in healthcare seems pretty much GIGO (see here, here, here, here and here). Also, this paper on the myth of generalisability in ML would have been transformed by a modicum of understanding of 'context of use' and 'Quality in Use'.
In the context of 'killer robots', a search for abstracts on "meaningful human control" (as of 04 May 2021) found none in PsyArXiv and two in arXiv CS - one of which is relevant.
More generally, a search of arXiv CS (27/3/2021) revealed 3573 refs to "gradient descent" (as a baseline), 13 refs to "hybrid intelligence", 3 refs to "augmented intelligence", 3 to Licklider, and 0 to Engelbart. A search of PsyArXiv showed 0 refs to "augmented intelligence" and 1 ref to "hybrid intelligence".
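Counts like these can be re-checked against the public arXiv export API. A minimal sketch in Python, assuming only the documented arXiv Atom endpoint and its opensearch:totalResults field; the helper names are mine, and PsyArXiv would need a separate query against its OSF-based API:

```python
# Re-running the arXiv literature-count comparison via the public export API
# (http://export.arxiv.org/api/query). Phrases are those from the text;
# exact counts will have drifted since 2021.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"

def count_query_url(phrase: str, category: str = "cs.*") -> str:
    """Build an arXiv API URL that asks only for the hit count."""
    params = urllib.parse.urlencode({
        "search_query": f'all:"{phrase}" AND cat:{category}',
        "max_results": 0,  # no entries needed, just opensearch:totalResults
    })
    return f"{ARXIV_API}?{params}"

def total_results(phrase: str, category: str = "cs.*") -> int:
    """Fetch the Atom feed and read the opensearch:totalResults element."""
    with urllib.request.urlopen(count_query_url(phrase, category)) as resp:
        feed = ET.fromstring(resp.read())
    ns = {"os": "http://a9.com/-/spec/opensearch/1.1/"}
    return int(feed.findtext("os:totalResults", namespaces=ns))

# Example (network required):
# for phrase in ("gradient descent", "hybrid intelligence", "augmented intelligence"):
#     print(phrase, total_results(phrase))
```

Requesting `max_results=0` keeps the response small: the Atom feed still carries the total hit count in its header even when no entries are returned.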
While there is good work going on, it is not moving the needle at all. Alan Winfield has summarised his situation here: "We roboticists used to justifiably claim that robots would do jobs that are too dull, dirty and dangerous for humans. It is now clear that working as human assistants to robots and AIs in the 21st century is dull, and both physically and/or psychologically dangerous. One of the foundational promises of robotics has been broken. This makes me sad, and very angry."
The 'think like a Centaur' work at OIO on Roby and its successors is the exception that proves the rule.
There has been a line of work looking at the Human Factors of automation (e.g. Bainbridge's Ironies of Automation), characterized by good technical quality and a massive lack of impact. Nearly all automated systems still make the same well-documented mistakes first noted by Jordan. At a practical level, these adverse consequences of poor automation could normally be addressed by mainstream risk / issue management. This very rarely happens. Indeed, it seems harder to introduce usable technology now than it was in the past. The gap between technical activity and concern for people seems deeply embedded and hard to bridge. The problems of automation and algorithms are not new or transitory. Very likely they go back to the beginnings of labour, capital, and debt (e.g. when storing grain became possible).
The Western capitalist hegemony is deeply antithetical to human-centredness (remember that the subtitle of 'Small is Beautiful' was 'Economics as if people mattered' - hardly the Amazon corporate handbook), from the level of a corporate project through to societal effects. Competent practitioners with good stakeholder support can show what can be done, but Human Centred Design will remain a niche activity. If human-centredness is to make any impact at all, then it is time for some completely fresh approaches. Fortunately, the time is ripe for just such fresh approaches, but the scale of the opportunity is somewhat daunting.
In conclusion, this Arthur C. Clarke quote on automation and jobs from 1969:
GENE: But you see the average person doesn’t see it. All he sees is that he’s going to be replaced by a computer, reduced to an IBM card and filed away.
CLARKE: The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.
GENE: Precisely. Now, we feel that if only this idea had come across in “2001,” instead of depicting machines as ominous and destructive. . .
CLARKE: But it would have been another film. Be thankful for what you’ve got. Maybe Stanley wasn’t interested in making that kind of film.