Thursday 7 July 2022

Myths and Narratives about "sustainability" in the Holocene

"Only to the white man was nature a 'wilderness', and only to him was the land 'infested' with 'wild' animals and 'savage' people. To us it was tame. Earth was bountiful and we were surrounded with the blessings of the Great Mystery." - Luther Standing Bear

"We abuse land because we regard it as a commodity belonging to us. When we see land as a community to which we belong, we may begin to use it with love and respect." - Aldo Leopold

"Democracy can also be subverted more thoroughly through the products of science than any pre-industrial demagogue ever dreamed." - Carl Sagan

"The critical thing about the design process is to identify your scarcest resource... You have to make sure your whole team understands what scarce resource you’re optimizing." - Fred Brooks

This note summarises some of the main myths and narratives about "sustainability" that have been promoted in the Holocene. It is strange that such a summary has not turned up in the course of investigations - possibly an indicator that Western society is not able to hold a discussion on the alternatives. For example, this book presents fossil fuels and industrial-scale renewables as equally unviable and notes the limits to current debate: "Renewable energy is not the solution we think it is. We have inherited the bad/good energy dichotomy of fossil fuels versus renewable energy, a holdover from the environmental movement of the 1970s that is misleading, if not false...By highlighting the myths surrounding renewable energy, we also create the groundwork for greater environmental considerations and the enactment of radical ecological alternatives that address the roots of consumer society and its marketed solutions."

A taxonomy notable for its rarity is Steve Fuller's UpWing/DownWing, Black/Green: "UpWingers (or “Blacks”), above all, anticipate futures of greater energy consumption. They tend towards technological solutionism; their view of the future is in the accelerationism/singularitarian spectrum. Politically, UpWingers tend to follow the American Right’s libertarian view of freedom, and the Left’s view of transcendent humanity. Human potential is unlimited and chaos can be tamed. UpWingers might wave away DownWing concerns as being surmountable. Black is the sky.
DownWingers (or “Greens”), broadly, anticipate futures of reduced energy consumption (through efficiency or destruction, if you’d like). They tend towards localization/resilience thought; their view of the future can range from declinist to hackstability (and even accelerationist in some respects). Politically, DownWingers tend to follow the Left’s view of communitarianism and the Right’s sense of natural order. Human nature is limited and chaos should be avoided. DownWingers might accuse UpWingers of hand-waving away complex problems with the dismissive answer, “We’ll think of something.” Green is the Earth."

A book that sounds worth reading is: The Wizard and the Prophet: Two Remarkable Scientists and Their Dueling Visions to Shape Tomorrow's World by Charles C. Mann "The Prophets, he explains, follow William Vogt, a founding environmentalist who believed that in using more than our planet has to give, our prosperity will lead us to ruin. Cut back! was his mantra. Otherwise everyone will lose! The Wizards are the heirs of Norman Borlaug, whose research, in effect, wrangled the world in service to our species to produce modern high-yield crops that then saved millions from starvation. Innovate! was Borlaug's cry. Only in that way can everyone win!"

A "Dryzek-style classification of climate change denial" is here.

Psychology of environmentalism

The psychology of environmentalism is discussed in a penetrating video here. The first 30 minutes are particularly relevant. Specific points of interest to this post are:

06:40 shame and guilt

07:10 apocalyptic environmentalism and depression

07:50 CBT elements and environmentalism

11:58 apocalyptic environmentalism is against solutions that work

13:30 original sin, death of god

17:40 Jung - not smart enough to create our own values

18:30 guilt at privilege as part of existential burden

20:00 opposed to solutions - destroy the whole system

21:50 solving the problem gets in the way of the alarmism - purpose of alarmism

22:30 alarmism is the goal - JP on value hierarchy

24:21 say environment highest value

24:40 you cannot fight environment and capitalism at the same time

25:00 MS on insincerity in saving nature; the goal is power itself ('The Great Mother', Erich Neumann, 26:30)

27:00 MS - all the optimism of ecotopia has gone (e.g. the ewok village); just apocalyptic environmentalism remains

This places apocalyptic environmentalism as a movement to keep us scared and passive. As such, it is not alone e.g. here.

Finding a way ahead

Of interest in the narratives below are the values and roles assigned to people, technology, and nature. Any assumed (or claimed) universality and/or context-sensitivity is also of interest. The hope is to extract some material of relevance to the Anthropocene. The difficulty of finding any way ahead is rarely acknowledged; Gail Tverberg is a notable exception. Judith Curry has a good summary of the climate narrative. Finding a way ahead is not helped by media polarization of discussion.

There is a line of argument that says that civilization as defined here is inherently unsustainable. See here.

David Wengrow has pointed out that erroneous myths of our past colour our view of the future here and that the present time could be considered an opportunistic moment. Recent cheap money and cheap energy have contributed to us arriving at the strange place we are in. See here.

We are also likely to need some imaginative futurism, along the lines of Peter Frase's Four Futures.

Three outstanding books covering this topic are worthy of note here, and their surface has hardly been scraped (though there is much fine writing within specific approaches):

  • Lean Logic by David Fleming - online here (h/t Stranger)
  • The Development Dictionary edited by Wolfgang Sachs
  • The Great Re-think by Colin Tudge here.

The Blue Marble Evaluation network here is also worth special consideration, e.g. this report. The Orgrad A-Z of thinkers here demonstrates the range and depth of thought being ignored in dominant narratives.

Population, demographics, state of resources

Some resources for Context of Use analysis at this level: the two-part video by Clint Laurent and Tony Nash of Complete Intelligence here and here on demographics is well worth the time. This analysis of future energy needs complements the demographics. This analysis of copper supply may also be relevant to any proposed electric future.

Fungibility of "resources"

John O'Neill's treatment of Natural Capital (.pdf) highlights claims that distinguish the folk at Davos - especially The Capitals Approach - from indigenous tribal leaders: "I argue that the concepts of natural capital and ecosystem services cannot capture all the dimensions of value that are central to human well-being."

1. Natural assets and ecosystem services: A basic defining claim that all accounts of natural capital share is that environmental goods, such as wetlands, woodlands and other sites of biodiversity, should be understood as assets that provide benefit streams—ecosystem services—for human well-being.

2. Compensation and substitutability: A second claim concerns substitutability: that losses in one component of capital can be substituted by gains in another, so long as the services they provide maintain or improve well-being.

3. Monetisation: A third claim is that the assets that make up natural capital can and should be assigned a monetary value.

4. Marketisation: A fourth claim is that markets in environmental goods provide the most efficient and effective way of achieving the aim of no net loss in natural capital.

5. Financialisation: A final stronger claim is that environmental goods can be protected by treating them as financial assets.

I am unconvinced that any of the fungibility above is justified other than by limiting the discussion to the limits of economics - i.e. entirely self-serving by that group. David Graeber has pointed out the limits of 'value' in economics. Converting 'natural capital' to 'value' is crass. Some quotes from here (h/t Jan Hoglund): "Economics…is about predicting individual behavior; anthropology, about understanding collective differences. …efforts to bring maximizing models into anthropology always end up stumbling into the same sort of incredibly complicated dead ends....All they [maximizing models] really add to analysis is a set of assumptions about human nature. ...The assumption, most of all, that no one ever does anything primarily out of concern for others; that whatever one does, one is only trying to get something out of it for oneself. ...In common English, there is a word for this attitude. It’s called “cynicism.” Most of us try to avoid people who take it too much to heart. In economics, apparently, they call it “science.”…economic anthropologists do have to talk about values. But…they have to talk about them in a rather peculiar way. …what one is really doing is taking an abstraction…and reifying it, treating it as an object…What economic theory ultimately tries to do is to explain all human behavior—all human behavior it considers worth explaining, anyway—on the basis of a certain notion of desire, which then in turn is premised on a certain notion of pleasure."

"The commodification of the commons will represent the greatest, and most cunning, coup d’état in the history of corporate dominance – an extraordinary fait accompli of unparalleled scale, with unimaginable repercussions for humanity and all life." from here.

"Biodiversity offsetting is a regulatory and planning system to ensure that a project with unavoidable negative biodiversity effects requires, as a last resort, carrying out additional measures to compensate these effects. Such biodiversity enhancing compensation measures can be nature based solutions and can include for instance measures to fulfil the remediation obligation under the Environmental Liability Directive(4) or to compensate for damage caused by plans or projects in Natura 2000 sites." From here.

The use of offsets needs careful scrutiny if it is to be in any way acceptable morally. Nice example here.
"∞ @hdevalence
Dear Team - Many of you have read some concerning stories about our user tracking. While media characterizations aren’t entirely accurate, we are listening and learning. Today, I can share some exciting news: we’ve committed to purchasing ethics offsets, to be net ethical by 2030
3:44 AM · Jun 7, 2021"

Our intuitions about environmental damage or energy use are likely to be faulty. Analytical approaches end up in a battle of externalities. So how do we choose between e.g. a coated cardboard milk carton and a reusable glass bottle? There will need to be some principles and morality to bound the process. Some principles are here.

Sample of Narratives

The list below is draft and incomplete, but I couldn't find one elsewhere.

Technocratic totalitarianism - managerialism

The alphabet soup of WEF, UN, SDG, ESG, Net Zero, Green Deals, a 'Green Growth Accelerator',  etc. This article on Mark Carney sums up the corporatist green narrative, and its financial manoeuvrings are outlined here. The underlying narrative is discussed as doctrine here. If you think the "Deals" are about de-growth, think again and again. In the UK, Net Zero has been 'costed' with an undisclosed dog-ate-my-homework spreadsheet. This piece and its links show that ESG is intellectually bankrupt. See also 'the trillion dollar fantasy' here. This is not about saving the planet. The process of capturing politicians etc. is set out here. The Great Reset is discussed strategically here.

"The interesting thing about the Green New Deal is that it wasn’t originally a climate thing at all." - "Do you guys think of it as a climate thing?" - "Because we really think of it as a how-do-you-change-the-entire-economy thing." — Saikat Chakrabarti, former chief of staff to Rep. Alexandria Ocasio-Cortez (D-N.Y. 14th District)

"No matter if the science of global warming is all phony…climate change provides the greatest opportunity to bring about justice and equality in the world." — Former Canadian Minister of the Environment, Christine Stewart

"The challenge I think we have is for some reason climate change has become a religion -- a politically induced religion instead of science fact that now we have to embrace and move forward on." — Former EPA Administrator Gina McCarthy

The dark side of the corporatist approach, such as Fortress Conservation, is discussed here and here. Stephen Corry: "No. "Wild" is a word in the English language, and it's used by the movement to mean "untrammeled by man", which is the definition in the U.S. Wilderness Act (1964). The idea starts in 19th century USA and it's profoundly wrong." Iain Provan has discussed these 'convenient myths'. See also this and this.

The myth that human-produced CO2 emissions could lead to catastrophic global warming is still at the heart of the financial system being imposed, despite changing terms to e.g. climate emergency.

There is the hope that the WEF campaign has been killed by Covid, Davos itself being the old normal. "The Davos crowd seek quick fixes, takeaways, action points and deliverables, rather than dwelling on the thoroughly uncomfortable reality of our condition, for fear of going into depression or becoming paralysed by inertia. The sooner that is ditched, the better....An encouraging number of business and political leaders worldwide are busy trying to figure out how to convince their respective audiences that their corporation, their institution, their political party or their government have understood that ‘going back to normal’ is not an option. It’s far from clear for many of them how they will prove that they have gotten the proverbial memo. But there is a very simple way to show that they haven’t. And that would be to go back to Davos."

People and the environment are treated as fungible commodities - standing resources to be exploited. Technology will be used by the elite to maintain control (Authoritarian technics rather than democratic technics in Mumford's terminology).

The OECD well-being lens here looks appealing but is likely to encounter difficulties in the context of WEF managerialism.

National Development Strategy with human-machine-ecological deep growth

A New Way Forward by UNDP and Dark Mountain here is a fairly comprehensive approach to the political and economic changes required for a more sustainable future. If it were readily feasible, we would not be in the mess we are in. The moral / spiritual approach is not really addressed. The use of the doughnut model (below) would indicate considerable allowance for fungibility.

Human-machine-ecological deep growth is:

  • Growth that accounts for all the negative externalities that result from the economic activity causing that growth;
  • Growth that is sustainable to the ecological, human and machine systems from which it draws inputs and to which it contributes;
  • Growth that maximises the potential of those systems by regenerating and augmenting them;
  • Growth that is the result of a regenerative economy, which is not only extracting natural resources, but maintains the natural ecosystem in which society is embedded and helps it thrive;
  • Growth that supports the development of foundational antifragility;
  • Growth that focuses on developing 21st century human, machine and ecological capabilities;
  • Growth that shifts the aim from a winner takes all mentality in structures that hitherto had defined parameters and goals and a foreseeable set of variables, to one where success in an uncertain and interconnected world is assessed on mutual advancement, self-sufficiency and maintenance. In other words, growth that focuses on infinite games instead of finite games.

The Good Life - Sufficientianism

One could start this story with 'The Acquisitive Society' by Tawney - a damning view from a different time. Proponents of sufficientianism include: Ivan Illich (e.g. on transport), Vaclav Smil, E.F. Schumacher, Wendell Berry (farming, technology), Low Tech Magazine, Low Tech Webring, Parrique, Hickel, Slowdown (Dan Hill), Transition Towns / permaculture, Self-Sufficiency, agroforestry (pdf), Michel Bauwens' Cosmo-Local production, Human Scale (Kirkpatrick Sale), Doughnut economics (Raworth), Meeting human needs (Jefim Vogel), Universal Basic Everything (Tessy Britton), Climate resilient cities (Eliason), Traditionalism (Wrath of Gnon), Distributism (here). Here is the scythe once again beating the strimmer. Bottom-up sensible farming and agroforestry e.g. here. Some of these are pragmatic, some tied to 'emissions'. Much of this literature preaches smallness and decentralisation. Kirkpatrick Sale is perhaps the most forceful, with the Beanstalk Principle, that "for every animal, object, institution, or system, there is an optimal limit beyond which it ought not to grow", and the Beanstalk Corollary, "Beyond this optimum size, all other elements of an animal, object, institution, or system will be adversely affected."

Thrive! by John Thackara and his 5% energy future fits here. There is a wiki to collect case studies. Also The Commoner’s Catalog for Changemaking here.

"Our focus should be services and infrastructures that require five per cent of the energy throughputs that we are accustomed to now. That’s the energy regime we’re likely to end up with, so why not work on that basis from now on?

Is five per cent impossible? On the contrary: for eighty per cent of the world’s population, five per cent energy is their lived reality today. Their situation is usually described as poverty, or a lack of development, but there are numerous ways in which the South’s five per cent delivers the same value as our 100-per-cent-and-rising."

This set of definitions of 'green growth' assembled by Timothée Parrique is notable for the absence of the word 'emissions'.

Green Growth Definitions (Parrique)

The issues here are:

  1. TPTB don't want us to be self-sufficient - they need us to be dependent. "The thing that really contradicts Communism is not Capitalism, but a small property as it exists for a small farmer or a small shop-keeper." G.K. Chesterton (see Distributism)
  2. The climate alarmists are not interested in practical solutions. It is likely that the alarmist establishment would use 'political technology' to shut them down. See Orlov here.
  3. What is 'sufficient' in Torbay might be considered 'excessive' in most of the world. Branko Milanovic on the feasibility of reducing inequality e.g. here has been the subject of debate (e.g. Hickel and Parrique). Not easily addressed. The comments on this piece about SER illustrate how many people would not accept "enough is as good as a feast". 'A Treasury official at one of the early meetings responded, “Now I see what sustainability means. It means going back to live in caves. And that’s what you’re all about, isn’t it?”' (from Jackson here)
  4. Linked to 2 above: Eco-sufficiency and distributive justice (sufficientarianism) are not the same, and the differences need resolving. This piece uses sufficientarianism incorrectly without apology. Kanschik here. "The notion of sufficiency has recently seen some momentum in separate discourses in distributive justice (‘sufficientarianism’) and environmental discourse (‘eco-sufficiency’). The examination of their relationship is due, as their scope is overlapping in areas such as environmental justice and socio-economic policy. This paper argues that the two understandings of sufficiency are incompatible because eco-sufficiency takes an extreme perfectionist view on the good life while sufficientarianism is committed to pluralism. A plausible explanation for this incompatibility relates to two different meanings of the term sufficiency as a limit (eco-sufficiency) and a minimum requirement (sufficientarianism)."
  5. Given the secular state of Western society, the spiritual aspects will be hard to address. Tim Jackson has discussed this in 'Consumerism as Theodicy' here.
  6. The right idea at the wrong time gets ignored - like the Chapelon Pacific locomotive here.
  7. The track record is dreadful. In 'The Enchantments of Mammon', Eugene McCarraher has a chapter on 'Small is Beautiful'. The exuberance of the writing makes for an enjoyable read, but the litany of failure is tragic.
  8. It is not clear that sufficientarian communities would survive socially in the face of hardship, and could face adverse consequences from eco-gentrification. Orlov talks about community organising here based on hard experience. My own, more flippant, take is here.

Restore the Soul of the World

The spirit of 'Hamlet's Mill' lives on, restoring the harmony of the spheres, based on ancient wisdom, Pythagorean thinking, and a world based on cosmological harmony (e.g. Robin and Richard Heath on the evolution of metrology, and John Michell on its spiritual import). It would seem to be a clear winner in ecological terms. The harmony of the spheres and whole number ratios continue to be relevant - in 'why phi' climate modelling here.

Cosmopolis (David Fideler here) sets out the approach.

The Roman Empire destroyed it first time round, and the Cartesian mechanistic Enlightenment killed the Renaissance. The current imperialists will be just as ill-disposed to 'make geometry not war'. In addition, the established religions will not appreciate having their feet of clay pointed out.

Gaia (Kit Pedler, James Lovelock, Lynn Margulis)

Gaia is the antithesis of the Anthropocene: nature is in charge, not people. Presumably this is not an excuse for us to behave recklessly, and there is an assumption that rich biodiversity is good for the ecosystem. However, context-specific guidance seems thin on the ground.

"We people are just like our planetmates. We cannot put an end to nature; we can only pose a threat to ourselves. The notion that we can destroy all life, including bacteria thriving in the water tanks of nuclear power plants or boiling hot vents, is ludicrous. I hear our nonhuman brethren snickering: 'Got along without you before I met you, gonna get along without you now,' they sing about us in harmony." - Lynn Margulis, The Symbiotic Planet

"In much the same way as the malignant cells of cancer invade and destroy the normal tissue of the body, so do the affairs and processes of the toymaker technocrats invade and destroy the balanced and stable earth organism." - Kit Pedler, The Quest for Gaia: A Book of Changes

Ecological Economics (Evonomics), Prosocial regeneration

Extended quote from here: "Lisi Krall: Ecological economics basically derives from the basic idea that the economy is a subsystem of the biosphere and therefore some attention has to be paid to how big this economic system can be. So that’s kind of the starting point. Ecological Economics has gone in two different directions — there are two branches. One is this ecosphere studies branch of ecological economics, and that branch is sort of associated with putting prices on things that aren’t priced in the economy. That’s entirely what it’s about. And it is hardly discernible from standard orthodox economics. It’s the study of externality, public goods, and that sort of thing. There’s really no difference. The other branch of ecological economics, which is the more revolutionary branch, is the branch that talks about the issue of scale. That branch has been very good in talking about the need to limit or end economic growth. But in the conversations about how we might do that — and in particular dealing directly with the problem of whether or not you can have a capitalist system that doesn’t grow — I think that’s where that branch of ecological economics has not been as clear as it needs to be.

So this kind of helps us transition into something that you talk about: ultrasociality. Can you first explain ultrasociality as a concept within the more-than-human world, within animals or insects. What is it in the more ecological sense?

First of all let me just say that I don’t think that there is an agreement about the definition of ultrasociality, either on the part of evolutionary biologists, or on the part of anthropologists and economists like myself. So I think that it is a word that’s used by different people to describe different things in the broader sense. I think it refers to complex societies that have highly articulated divisions of labor and develop into large scale — essentially city states, and practice agriculture. That’s the definition that’s used in our work, the work that I’ve done with John Gowdy. We have adopted that definition. And so ultrasociality, I would say, is a term that has meaning other than in human societies - to talk about those kinds of societies that occur mostly in other than humans: in ants and termites that practice agriculture."

...And from here "When seen in this light, the economy is entirely self-subsistent, whose workings are understandable quite independently of society or the political system. ...Once we abandon the circular flow framework, however, and recognize the economies are embedded in value-laden societies, our values come to play a central role in understanding the purpose of economics. Just as societies are human constructs that are meant to serve the individual and collective needs of their members, so economies should serve these needs as well. In this light, economics can be reconceived as the discipline that explores how resources, goods and services can be mobilized in the pursuit of wellbeing in thriving societies, now and in the future."

A more general discussion of society as an organism, and its implications for economics is here.

Prosocial (see here) is a change method based on evolution.


Accounting

It looks like accounting has been instrumental in much of the damage to the ecosphere. Fungibility of resources allows for tricks that mask damage. There are movements to produce accounting that is less damaging. They look promising (of course). Any assessment of their feasibility or potential impact is beyond me. Examples include:
  • Long-termism - intergenerational accounting here and here.
  • Commodity based currency, social currency (Chris Cook) here and here.
  • Commons accounting here and here.


Solarpunk

Solarpunk is a literary movement (here), and thus under no obligation to produce costed transition plans or planetary impact assessments. Of course, nobody else produces these either, so it is not surprising that Solarpunk has found practical application e.g. here, and in the Indian Swadeshi movement.


Techno-optimism

'More from Less' (McAfee), 'Apocalypse Never' (Shellenberger), 'Golden Age' (Scott Adams), and the 'Abundance Manifesto' (Wood) all argue that prosperity and technical advance are good for the planet as well as for people. The optimism contrasts so strongly with the wave of apocalyptic noise that I guess it gets labelled "if it sounds too good to be true, it probably is".


Return to our roots - Indigenous wisdom

There is much truth in this approach (e.g. forest gardens) but also some myths (Iain Provan here and here). Persuading Westerners to live like "natives" won't be easy.


Collapse

Not surprisingly, there is much talk of ecological collapse, Malthusian ecofascism etc. If that is a threat that needs to be addressed now, then the case needs to be made, and we need to recognise that 'middle class gardening clubs' won't survive. The hype from the climate alarmists will make it difficult to be heard. Perhaps best known is Tainter's approach to collapse caused by complexity e.g. here.

Gail Tverberg sets out the scene for a coming collapse based on increasing complexity and lack of energy here.

Dmitry Orlov has written on communities that survive e.g. here and here.

Ecosophy, Deep Ecology, Three ecologies (Guattari, Naess)

Wikipedia on Ecosophy here says: "Guattari holds that traditional environmentalist perspectives obscure the complexity of the relationship between humans and their natural environment through their maintenance of the dualistic separation of human (cultural) and nonhuman (natural) systems; he envisions ecosophy as a new field with a monistic and pluralistic approach to such study. Ecology in the Guattarian sense, then, is a study of complex phenomena, including human subjectivity, the environment, and social relations, all of which are intimately interconnected."

From here, we learn that "The concept of the three ecologies; three interconnected networks existing at the scales of mind, society and the environment, was originally formulated by influential theorist Gregory Bateson in Steps to An Ecology of Mind, however Guattari seeks to elaborate and refine the concept in more detail, while additionally adding a more radical form of poststructuralist Marxism to Bateson’s ecological system."

Pre-empting the global networks of power and resistance described by Hardt and Negri in Empire and Multitude, Guattari argues that ‘The only true response to the ecological crisis is on a global scale, provided that it brings about an authentic political, social and cultural revolution, reshaping the objectives of the production of both material and immaterial assets.’

The Rizoma Field School here is based on ideas from Deleuze and Guattari.


The Pluriverse

The Pluriverse Post-Development Dictionary here "offers critical essays on mainstream solutions that ‘greenwash’ development, and presents radically different worldviews and practices from around the world that point to an ecologically wise and socially just world." The word is explained here as "The West’s universalizing tendency was nothing new, but it claimed a superior position for itself. The pluriverse consists in seeing beyond this claim to superiority, and sensing the world as pluriversally constituted. Or, if you wish, pluriversality becomes the decolonial way of dealing with forms of knowledge and meaning exceeding the limited regulations of epistemology and hermeneutics. Consequently, pluriversality names the principles and assumptions upon which pluriverses of meaning are constructed. ... Thus conceived, pluriversality is not cultural relativism, but the entanglement of several cosmologies connected today in a power differential. That power differential, in my way of thinking and doing, is the logic of coloniality covered up by the rhetorical narrative of modernity. Modernity—the Trojan horse of Western cosmology—is a successful fiction that carries in it the seed of the Western pretense to universality"

To conclude

How do we navigate the ways ahead? If we are allowed to debate the alternatives to Net Zero, how do we assess them?

Remember Orgel's Second Rule: "Evolution is cleverer than you are."

Good intentions are certainly inadequate - they can be thwarted by system dynamics, see Dietrich Dörner here.

Thursday 27 January 2022

Tools and teams as interaction metaphors

Kranzberg's First Law (here): “Technology is neither good nor bad; nor is it neutral.” By which he means that: “technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

On Twitter, Ben Shneiderman suggested that folk build tools rather than human-AI teams. My gut instinct was that he was right, but the topic seemed worthy of a quick look. 

A tool may not be human centred; does the tool become an extension of the person (like a violin), or does the person become an extension of the machine (as in a factory)? A team approach could be great or could be 'son of Clippy'. Score: Neutral to choice, One to good design practice.

Both tool and team approaches need to monitor the 'affect dilemma' in operation (see Jokers). Another one to good design practice.

'Trust' is an output variable and needs to be understood and measured but finding out how to build human-AI teams needs work on inputs. 'Trusted' and 'trustworthiness' are separate components of trust. It is worth noting that trust is also an issue for tools. Merriam-Webster notes "...the words associated with trusted mostly refer to people, while those most associated with trusty refer to animals, equipment, and tools in addition to people. We therefore say “trusty Swiss Army knife” but never “trusted Swiss Army knife”; its utility and dependability are inherent, not sought, developed, or earned. This distinction is relatively recent; it seems to have settled into its current usage by the 1940s. Shakespeare had used trusty for both meanings (“trusty servant” and “trusty sword” occur in his works), and both Dickens and Conan Doyle used trusty to describe people rather than animals or things. Emily Dickinson used the word to mean something closer to trustworthy or dependable:" Building trustworthy digital team members is non-trivial as Alexa users found out here. Similar contextual limits apply to tools e.g. cockpit automation.

Are 'recommender systems' tools or team members? Maybe there are other categories we need? Most recommenders - and most automation - are 'strong silent automation' (Woods), with all the attendant problems.

People have been making tools for a very long time, and we must have some idea of how to go about it. Building AI team members has yet to really happen. So, pragmatically, score one to tools. Back in the late 1980s, there was much research activity into Human-Electronic Crew Teamwork (Pilot's Associate etc.), which never materialised in production. At that time, Jack Shelnutt (so far as I could tell) did careful task tailoring to build tools that looked like they worked. How to design dialogue seems to be an art form that has come and gone e.g. here. Probably another one to tools.

I suspect the idea of automation as a team player was a counter to strong silent automation e.g. here and here to enable coordination between human and automated actions and perceptions of the world. It is not obvious how a tool metaphor could do this. Score one to teams.

“The world can only be grasped by action, not by contemplation. The hand is the cutting edge of the mind.” - Jacob Bronowski. Tools traditionally provide feedback through the control side of the loop. This is under-explored e.g. the H-metaphor here.

Strong silent automation continues under the guise of autonomy - a very strange design intent (e.g. here); to a large extent any metaphor should counter this. Goodrich, on Human-Robot Interaction here: "One operational characterization of autonomy that applies to mobile robots is the amount of time that a robot can be neglected, or the neglect tolerance of the robot [68]. A system with a high level of autonomy is one that can be neglected for a long period of time without interaction. However, this notion of autonomy does not encompass Turing-type notions of intelligence that might be more applicable to representational or speech-act aspects of autonomy. Autonomy is not an end in itself in the field of HRI, but rather a means to supporting productive interaction. Indeed, autonomy is only useful insofar as it supports beneficial interaction between a human and a robot." Autonomous cars need to interact e.g. with pedestrians here. Are they tools or teams?
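Goodrich's neglect-tolerance idea can be put in concrete terms. The sketch below is a toy illustration only - the linear decay model and all the numbers are my assumptions, not from Goodrich: neglect tolerance is how long task performance stays above an acceptable floor without human interaction.

```python
# Toy illustration of "neglect tolerance": how long can a robot be left
# alone before its task performance drops below an acceptable floor?
# The linear decay model and its parameters are illustrative assumptions.

def neglect_tolerance(initial_performance: float,
                      decay_per_second: float,
                      floor: float) -> float:
    """Seconds of neglect before performance falls below the floor."""
    if initial_performance <= floor:
        return 0.0           # already unacceptable: needs immediate interaction
    if decay_per_second <= 0:
        return float("inf")  # performance never degrades under neglect
    return (initial_performance - floor) / decay_per_second

# A robot at 90% performance, losing 2 percentage points per second of
# neglect, with a 50% acceptability floor, tolerates 20 seconds of neglect.
print(neglect_tolerance(0.9, 0.02, 0.5))  # → 20.0
```

On this framing, "high autonomy" just means a long tolerable interval between interactions - which is exactly why, as Goodrich says, it is a means to productive interaction rather than an end in itself.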

An aspect of context of use to be considered in choice of metaphor is dynamic value alignment. There was a discussion in the service design community on 'co-creation' - could an airline booking system detect that the user was booking a holiday rather than a business trip, and automatically adjust trade-offs such as speed vs. cost? In a military situation, values may change rapidly, and 'teamwork' is about recognising this and responding quickly. Value alignment is hard. Automatic dynamic value alignment is a real challenge to automation. If this can be done, then real teamwork is a possibility.
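The booking example can be sketched in miniature. Every name, weight, and number below is invented for illustration, and real value alignment is far harder than a static re-weighting - but it shows the mechanism: an inferred purpose shifts the speed-vs-cost trade-off before options are ranked.

```python
# Hypothetical sketch of dynamic value alignment in a booking system:
# an inferred trip purpose shifts the speed-vs-cost trade-off.
# Weights, the purpose signal, and flight data are all invented.

def tradeoff_weights(trip_purpose: str) -> dict:
    """Scoring weights adjusted to the inferred context of use."""
    if trip_purpose == "business":
        return {"speed": 0.8, "cost": 0.2}   # time matters most
    return {"speed": 0.2, "cost": 0.8}       # holiday: price matters most

def score(option: dict, weights: dict, options: list) -> float:
    # Normalise against the best value on offer, so both criteria sit in (0, 1].
    fastest = min(o["hours"] for o in options)
    cheapest = min(o["price"] for o in options)
    return (weights["speed"] * fastest / option["hours"]
            + weights["cost"] * cheapest / option["price"])

flights = [
    {"name": "direct",   "hours": 2, "price": 400},
    {"name": "one-stop", "hours": 7, "price": 150},
]

def best(trip_purpose: str) -> dict:
    w = tradeoff_weights(trip_purpose)
    return max(flights, key=lambda o: score(o, w, flights))

print(best("business")["name"])  # → direct
print(best("holiday")["name"])   # → one-stop
```

The hard part, of course, is not the re-weighting but reliably inferring the user's current values - and noticing when they change.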

In conclusion, if a trusty tool metaphor looks like working, that sounds good, unless it introduces strong silent automation. Assistant-type dialogue is still difficult and real teams are still a research project. Good human-centred design practice is needed whatever.

Tuesday 21 September 2021

The State of Human-Centredness in AI and automation

 There is much talk about a human-centred approach to AI, and using AI to provide Intelligence Augmentation. For example, this talk contrasts AI as a magic beanstalk vs. AI as a tool for human use.

This post examines the practical likelihood of achieving such aims at any scale, and reviews the forces opposing the adoption of a human-centred approach to automation. Scott Berkun has done us a favour by writing an excellent example of a plea for human-centredness in design - just the sort of thing that has been ignored for decades: "We need to shift how we measure progress away from the potential in a technology and toward what people are actually able to achieve with it.... Everyone, from consumers to programmers to business leaders, must become more educated about what good design really means. For consumers, this isn’t necessarily to become designers themselves, but to become better judges of the true value of things before they buy them. Technologists and businesspeople need to understand the common traps that lead to bad design and do what they can to reduce them. This is often as simple as valuing design experts enough to listen to them at the start of projects when the important decisions are made, rather than at the end when their advice will be far too late." 

This post makes the safe assumption that Scott will be ignored, and attempts to probe the how and why of that.

In the beginning..

Nehemiah Jordan worked for RAND Corporation on the SAGE air defence system. In a series of articles in Psych. Rev. (1963), he outlined most of the human problems with introducing automation. He wrote up the lessons learned in the classic book 'Themes in Speculative Psychology' (1968). Two quotes are relevant here - on motivation, and on Allocation of Function (to human or machine).


"In designing a complex man-machine system one should consider the human performance necessary for the system, not only from an instrumental standpoint, but also from a consummatory standpoint, that is: how satisfying the job is per se. For jobs to be satisfying three conditions seem to be necessary and sufficient: they must demand of the operator the utilization of skills; they must be meaningful; and the operator must have real responsibility. It was also asserted that although human factors engineering neglected the consummatory standpoint, as long as machines were relatively crude, this neglect was not critical. With the mushrooming development of automation, however, we cannot afford this luxury any more. In designing and thinking about our new complex automated man-machine systems we must take the consummatory standpoint into account; we must learn to design for men jobs that are intrinsically interesting and satisfying."

Allocation of Function

"In other words, to the extent that man becomes comparable to a machine we do not really need him any more since he can be replaced by a machine. This necessary consequence was actually reached, but not recognized, in a later paper, also a fundamental and significant paper in human factor engineering literature. In 1954 Birmingham and Taylor in their paper: ‘A Design Philosophy for Man-Machine Control Systems’, write:‘... speaking mathematically, he (man) is best when doing least’ [1, p. 1752]. The conclusion is inescapable - design the man out of the system. If he does best when he does least, the least he can do is zero. But then the conclusion is also ridiculous....

I suggest that ‘complementary’ is probably the correct concept to use in discussing the allocation of tasks to men and to machines. Rather than compare men and machines as to which is better for getting a task done, let us think about how we complement men by machines, and vice versa, to get a task done.

As soon as we start to think this way, we find that we have to start thinking differently. The term ‘allocation of tasks to men and machines’ becomes meaningless. Rather we are forced to think about a task that can be done by men and machines. The concept ‘task’ ceases to be the smallest unit of analysis for designing man-machine systems, though still remaining the basic unit in terms of which the analysis makes sense. The task now consists of actions, or better still activities, which have to be shared by men and machines. There is nothing strange about this. In industrial chemistry the molecule is the fundamental unit for many purposes and it doesn’t disturb anybody that some of these molecules consist of hundreds, if not thousands, of atoms. The analysis of man-machine systems should therefore consist of specifications of tasks and activities necessary to accomplish the tasks. Man and machine should complement each other in getting these activities done in order to accomplish the task.

It is possible that with a shift to emphasizing man-machine complementarity, new formats for system analysis and design will have to be developed, and these formats may pose a problem. I am convinced, however, that as soon as we begin thinking in proper units, this problem will be solved with relative ease. Regardless of whether this is so, one can now already specify several general principles that may serve as basic guidelines for complementing men and machines."

John Allspaw has a thread on Fitts List and the un-Fitts List here.

From the outset, we knew that the design of automation should follow from the design of jobs. Simplistically, a Plan-Do-Check-Act (PDCA) cycle for job and organization design drives a PDCA cycle for automation. We also knew not to do 'job design by left-overs' i.e. automate that which is easy to automate, and leave people to do the rest.

As you will be aware, this is not what has happened.

Why is human-centred automation so rare compared to human replacement automation?

Chris Boorman (@CHBoorman) - in a long-gone blog post - contrasted cost-reduction human replacement automation with human centred automation: "Automation is an essential capability for enterprises seeking to innovate – whether through internal channels, acquisition or partnership. Gartner has previously stated that for many organizations 80% of time can be spent on day-to-day processes, or ‘keeping the lights on’ and this is not sustainable if they are to continue to win market share and grow in increasingly competitive markets.
Automation enables enterprises to automate those core processes not to make cuts, but to free up resource to work on new disruptive projects. Faced with an increasingly complex world of technology - cloud, mobile, big data, internet of things - as well as growing consumer expectations, every business needs to turn to automation or perish.
Automation needs to be ingrained in an organization’s DNA early on and not deployed later as a replacement measure for existing job functions. It should instead be used to allow people and resources to be more focused on driving the business forwards, rather than on just keeping the lights on.
Every industry is going through a period of change as new technologies and new entrants look to disrupt the status-quo. Automation is a key enabler for helping enterprises to disrupt their own industries and drive that change. Acquiring new customers, retaining customers, driving business analytics, consolidating enterprises following mergers or driving agility and speed are all critical business imperatives. Automation delivers the efficiency and enables the new way of thinking from your brightest talent to succeed."

Prefix capitalism has devised the worst of both worlds with pre-automation: "We define pre-automation as the coincident, strategic effort to scale a workforce and monopolize a distribution network via platform while simultaneously investing in its automated replacement."

Frank Pasquale puts it this way: "All too often, the automation literature is focused on replacing humans, rather than respecting their hopes, duties, and aspirations. A central task of educators, managers, and business leaders should be finding ways to complement a workforce’s existing skills, rather than sweeping that workforce aside. That does not simply mean creating workers with skill sets that better “plug into” the needs of machines, but also, doing the opposite: creating machines that better enhance and respect the abilities and needs of workers. That would be a “machine age” welcoming for all, rather than one calibrated to reflect and extend the power of machine owners."

Well-run organizations with a human-centred approach e.g. using Henry Stewart's Happy Manifesto or ISO 27500:2016 would have no great problem with human-centred automation. Similarly, proper Lean organizations such as Toyota. However, such organizations are rare and against the grain. Theory Y is rare compared to Theory X in practice. Bullshit Jobs (Graeber) are everywhere, and organizations seem to have adopted The Gervais Principle (Rao) as a manual. In developing ISO TS 18152 we found that to link job design and automation took a ton of activities at all levels of management, and at all stages of the lifecycle. Current organizations and project structures really do not do human-centredness unless forced to.

Hostile business models have more or less stopped any chance of positive User Experience (UX), as noted by Mark Hurst here. Prefix Capitalism (Tante) is propagating Chickenized Reverse Centaurs (Cory Doctorow), shitty automation, the surveillance panopticon, with added ethicswashing. A human-centred approach to the financialised world would include the challenging task of supporting 'investee activism' (Feher) and 'arts of doing' (De Certeau).

Globalization and expansion to society level

Automation has extended to a global level, interacting with society as a whole (e.g. Facebook algorithms, where user issues include privacy and identity - a long way from issues of numbers of mouse clicks). This is being addressed as a battle of words between The Lords of the Valley and elected politicians. Going swimmingly. The European Union seems to be the regulator for Silicon Valley, but the focus is on software and data. The reaction by Google and others to the proposed EC AI Regulation more or less demonstrates its necessity. The EC proposed Regulation addresses important risks, but does not attempt to meet the stated aim of being human-centric. Niels Bjorn-Andersen (1985) raised the question of “whether all our (the HF community) intellectual capacity, energy and other precious resources are being utilized to:
- Soften the technology to make it more compatible with human beings (through removing the flicker in order not to damage the eyes, detaching the keyboard in order not to damage the back of the operator, making it so easy to use that “even a child or a mentally retarded person can use it” etc.) and in this way provide a sugar coating on the pill so that it may be swallowed more easily, or whether
- we are genuinely contributing to the attainment of true human values.

(Bjorn-Andersen, N. ‘Are “Human Factors” human?’, Contribution to Man Machine Integration, State of the Art Report, Pergamon Infotec, Jan 1985.)

The EC proposed Regulation is definitely in the first camp, as has been pointed out by ETUI and here.

In contrast, the Principles of Human Centred Design (ISO 9241-210:2019) are:

  1. The design is based upon an explicit understanding of users, tasks and environments
  2. Users are involved throughout design and development
  3. The design is driven and refined by user-centred evaluation
  4. The process is iterative
  5. The design addresses the whole user experience
  6. The design team includes multidisciplinary skills and perspectives

At a society level, the analysis of a potential 'robot takeover' is being done in a top down manner by *economists* using a watered-down version of Fitts List, and Human Replacement Automation. What could possibly go wrong? (A succinct thoughtful analysis of jobs and automation is provided by Benanav).

The relationship between people and nature has lost much in the change from 'indigenous' to 'urban'. This piece uses 'human-centred' in a valid accusatory manner. The defence of human-centredness would be to say that the design intent of suburban life being criticised is 'less-than-human centred' and that the relationship with nature is a part of human-centredness. However, it would be hard to find examples in practice so labelled - a hypothetical defence using a possible future human-centredness.

State of human-centredness and AI / ML

Some sectors have taken a human-centred approach to AI/ML in their sector:

Autonomous Urbanism and NACTO "The cautious optimism that characterized the first edition of the Blueprint for Autonomous Urbanism, published in 2017, has been tempered by recognition of the enormity of the policy foundation that must be laid for us to reach a human-focused autonomous future. Like the first Blueprint, this edition lays out a vision for how autonomous vehicles, and technology more broadly, can work in service of safe, sustainable, equitable, vibrant cities. This vision builds on and reinforces the past decade of transformative city transportation practice. It prioritizes people walking, biking, rolling, and taking transit, putting people at the center of urban life and street design, while taking advantage of new technologies in order to reduce carbon emissions, decrease traffic fatalities, and increase economic opportunities....Automation without a comprehensive overhaul of how our streets are designed, allocated, and shared will not result in substantive safety, sustainability, or equity gains. By implementing proactive policies today, cities can act to ensure that the adoption of AV technologies improves transportation outcomes rather than leading to an overall increase in driving."

The American Medical Association has a policy: "Our AMA advocates that:

  • AI is designed to enhance human intelligence and the patient-physician relationship rather than replace it
  • Oversight and regulation of health care AI systems must be based on risk of harm and benefit, accounting for a host of factors, including but not limited to: intended and reasonably expected use(s); evidence of safety, efficacy and equity, including addressing bias; AI system methods; level of automation; transparency; and conditions of deployment
  • Payment and coverage for all health care AI systems must be conditioned on complying with all appropriate federal and state laws and regulations, including but not limited to those governing patient safety, efficacy, equity, truthful claims, privacy and security, as well as state medical practice and licensure laws
  • Payment and coverage for health care AI systems intended for clinical care must be conditioned on:
    • Clinical validation
    • Alignment with clinical decision-making that is familiar to physicians
    • High-quality clinical evidence
  • Payment and coverage for health care AI systems must:
    • Be informed by real-world workflow and human-centered design principles
    • Enable physicians to prepare for and transition to new care delivery models
    • Support effective communication and engagement between patients, physicians and the health care team
    • Seamlessly integrate clinical, administrative and population health management functions into workflow
    • Seek end-user feedback to support iterative product improvement
  • Payment and coverage policies must advance affordability and access to AI systems that are designed for small physician practices and patients and not limited to large practices and institutions
  • Government-conferred exclusivities and intellectual property laws are meant to foster innovation, but constitute interventions into the free market, and therefore should be appropriately balanced with the need for competition, access and affordability."

While welcome, the state of such initiatives is orders of magnitude less than what is needed - even within healthcare AI. The state of ML in healthcare seems pretty much GIGO, see here and here and here and here and here. Also, this paper on the myth of generalisability in ML would have been transformed by a modicum of understanding of 'context of use' and 'Quality In Use'.

In the context of 'killer robots', there are no abstracts on "meaningful human control" (as of 04 May 2021) in psyarxiv and 2 in CS arxiv - one of which is relevant.

More generally, a search of Arxiv CS (27/3/2021) revealed 3573 refs to "gradient descent" (as a baseline), 13 refs to "hybrid intelligence", 3 refs to "augmented intelligence", 3 to Licklider, 0 to Engelbart. A search of Psyarxiv showed 0 refs to "augmented intelligence" and 1 ref to "hybrid intelligence".

While there is good work going on, it is not moving the needle at all. Alan Winfield has summarised his situation here: "We roboticists used to justifiably claim that robots would do jobs that are too dull, dirty and dangerous for humans. It is now clear that working as human assistants to robots and AIs in the 21st century is dull, and both physically and/or psychologically dangerous. One of the foundational promises of robotics has been broken. This makes me sad, and very angry."

The 'think like a Centaur' work at OIO on Roby and its successors is the exception that proves the rule.


There has been a line of work looking at the Human Factors of automation (e.g. Bainbridge's Ironies of Automation), characterized by good technical quality and massive lack of impact. Nearly all automated systems still make the same well-documented mistakes first noted by Jordan. At a practical level, these adverse consequences of poor automation can normally be addressed by mainstream risk / issue management. This very rarely happens. Indeed, it seems harder to introduce usable technology now than it was in the past. The gap between technical activity and concern for people seems deeply embedded and hard to bridge. The problems of automation and algorithms are not new or transitory. Very likely they go back to the beginnings of labour, capital, and debt (e.g. when storing grain became possible).

The Western capitalist hegemony is deeply antithetical to human-centredness (remember that the subtitle of 'Small is Beautiful' was 'Economics as if people mattered' - hardly the Amazon corporate handbook), from the level of a corporate project through to societal effects. Competent practitioners with good stakeholder support can show what can be done, but Human Centred Design will remain a niche activity. If human-centredness is to make any impact at all, then it is time for some completely fresh approaches. Fortunately, the time is ripe for just such fresh approaches but the scale of the opportunity is somewhat daunting.

In conclusion, this Arthur C. Clarke quote on automation and jobs from 1969:

GENE: But you see the average person doesn’t see it. All he sees is that he’s going to be replaced by a computer, reduced to an IBM card and filed away.

CLARKE: The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.

GENE: Precisely. Now, we feel that if only this idea had come across in “2001,” instead of depicting machines as ominous and destructive. . .

CLARKE: But it would have been another film. Be thankful for what you’ve got. Maybe Stanley wasn’t interested in making that kind of film.

Engineers and Human Values

 “If it weren't for the people, the god-damn people' said Finnerty, 'always getting tangled up in the machinery. If it weren't for them, the world would be an engineer's paradise.” - Kurt Vonnegut, Player Piano

"Nice thread, but thinking of AI as “user-centered” is a narrow view. Shouldn’t the real goal of AI be to create truly autonomous intelligent beings rather than servants for human purposes? We’re just building smarter screwdrivers today." Ali Minai @barbarikon

The failure of engineers to understand user and stakeholder needs and values is an old problem. From Plato's dialogue "The Republic"
[suggestion - read painter / imitator as marketing]:

“Will a painter, say, paint reins and bridle?” “But a saddler and a smith will make them?” “Certainly.”
“Does the painter know what the reins and the bridle ought to be like? Or is it the case that not even the smith and the saddler, who made them, know that, but only the horseman, the man who knows how to use them?” “Very true.”
“And shall we not say the same about everything?” “What?”
“That there are three arts concerned with each thing — one that uses, one that makes, and one that imitates?”
“Then are the virtue and beauty and correctness of every manufactured article and living creature and action determined by any other consideration than the use for which each is designed by art or nature?” “Then it is quite inevitable that the user of each thing should have most experience of it, and should be the person to inform the maker what are the good and bad points of the instrument as he uses it. For example, the flute-player informs the flute-maker about the flutes which are to serve him in his fluting; he will prescribe how they ought to be made, and the maker will serve him.” “Surely.”
“Then he who knows gives information about good and bad flutes, and the other will make them, relying on his statements?” “Yes.”
“Then the maker of any article will have a right belief concerning its beauty or badness, which he derives from his association with the knower, and from listening, as he is compelled to do, to what the knower says; but the user has knowledge?” “Certainly.”

This post is partly in response to a well-considered article on the need for engineers to understand human values and adopt systems thinking here. The concern with the article is that its aspirations are doomed.

 Update: To an extent, it could be considered a diagnosis of the 'Engineer's Disease' discussed here, with differing versions here and here. I was alerted to the disease by Paul Graham Raven with this post.

I have had the pleasure and privilege to work with folk from many different backgrounds who have practiced Human Centred Design (HCD) well. Engineers who 'get' human values and HCD can be powerful forces for good. However, they are the exception that proves the rule. Building artefacts that reflect human values needs multi-disciplinary teamwork if the process is to deliver dependably. The idea that engineers can embrace the consideration of human values as a result of a training course is a doomed hope. This post presents some of the ways in which engineers frequently and persistently fail to consider human values. The logic is that any one of these ways can be sufficient to prevent a system reflecting human values.

Autogamous technology

Gene I. Rochlin defined autogamous technology as self-pollinating and self-fertilizing, responding more and more to an inner logic of development than to the needs and desires of the user community. The term has not found widespread use. However, the existence of such technology is widespread, perhaps characterized by the Internet Fridge, and the Internet of Shit. Is it realistic to expect engineers to be able to answer 'Question Zero' here - quite probably not. If not engineers, then who?

Nigel Bevan persuaded the software standards community that the purpose of quality during design was to achieve Quality In Use (QIU).  Why else would anyone build a system? This post does not get to the bottom of that question but provides some pointers as to why building a system that does not reflect human needs and values is routine.

Monastic seclusion

The archetypal approach to engineering is for one or more engineers to work in a lab or garage to bring their creation to life. This is a secluded environment, free of distractions. The Human Centred Design approach is very out-and-about and social, listening to users and stakeholders, trying things out, and working in a multi-disciplinary team (see below). Many engineers (and egotistical industrial designers) treat such an approach with contempt and see it as interfering with real work.

Principles of Human-Centred Design ISO 9241-210:2019
5.2 The design is based upon an explicit understanding of users, tasks and environments
5.3 Users are involved throughout design and development
5.4 The design is driven and refined by user-centred evaluation
5.5 The process is iterative
5.6 The design addresses the whole user experience
5.7 The design team includes multidisciplinary skills and perspectives

In 'What Engineers Know and How They Know It', Walter Vincenti says "artifactual design is a social activity." Chapter 3 of the book gives an account of how flying qualities were re-conceptualized over a ten year period of engineers and pilots working very closely as a team.

In some situations, it is possible for engineers to relate to the user and context of use directly. For example, Toyota engineers:

'As Kousuke Shiramizu, Lexus quality guru and executive vice president explains, “Engineers who have never set foot in Beverly Hills have no business designing a Lexus. Nor has anybody who has never experienced driving on the Autobahn firsthand.”'

"The story concerns a chief engineer who moved in with a young target family in southern California to enhance his understanding of the generation X lifestyle associated with RAV Four customers. While developing Toyota’s successful 2003 Sienna, the Sienna CE drove his team in Toyota’s previous minivan model more than 50,000 miles across North America through every part of Canada, the United States, and Mexico. The CE experienced a visceral lesson in what is important to the North American minivan driver and discovered in every locale new opportunities for improving the current product. As a result, the Sienna was made big enough to hold full sheets of plywood while the turning radius was tightened, more cupholders were added, and cross-wind stability was enhanced, among many other improvements that resulted from this experience."

Both of the above from 'The Toyota Product Development System' by James M. Morgan and Jeffrey K. Liker

In other situations, the impact of a proposed system on various groups and their context of use may not be intelligible or accessible directly, and a plan of work is required, possibly including the use of resources such as ergonomists or anthropologists.

Engineering values and humanity

Nicholas Carr hits the nail on the head about the values implicit in automation here. "Google’s Android guru, Sundar Pichai, provides a peek into the company’s conception of our automated future:
“Today, computing mainly automates things for you, but when we connect all these things, you can truly start assisting people in a more meaningful way,” Mr. Pichai said. He suggested a way for Android on people’s smartphones to interact with Android in their cars. “If I go and pick up my kids, it would be good for my car to be aware that my kids have entered the car and change the music to something that’s appropriate for them,” Mr. Pichai said.

What’s illuminating is not the triviality of Pichai’s scenario — that billions of dollars might be invested in developing a system that senses when your kids get in your car and then seamlessly cues up “Baby Beluga” — but what the urge to automate small, human interactions reveals about Pichai and his colleagues. With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be “freed up” to do something “more valuable.” Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.

I guess it’s no surprise that what Pichai expresses is a robot’s view of technology in general and automation in particular — mindless, witless, joyless; obsessed with productivity, oblivious to life’s everyday textures and pleasures. But it is telling. What should be automated is not what can be automated but what should be automated." [emphasis added].

Abeba Birhane et al have ascertained the values implicit in ML here:

"We reject the vague conceptualization of the discipline of ML as value-neutral. Instead, we investigate the ways that the discipline of ML is inherently value-laden. Our analysis of highly influential papers in the discipline finds that they not only favor the needs of research communities and large firms over broader social needs, but also that they take this favoritism for granted. The favoritism manifests in the choice of projects, the lack of consideration of potential negative impacts, and the prioritization and operationalization of values such as performance, generalization, efficiency, and novelty. These values are operationalized in ways that disfavor societal needs, usually without discussion or acknowledgment. Moreover, we uncover an overwhelming and increasing presence of big tech and elite universities in highly cited papers, which is consistent with a system of power-centralizing value-commitments. The upshot is that the discipline of ML is not value-neutral. We find that it is socially and politically loaded, frequently neglecting societal needs and harms, while prioritizing and promoting the concentration of power in the hands of already powerful actors."

User information needs

Bainbridge's Ironies of Automation here are still unresolved, and the problems of supervisory control remain frequently unaddressed. Donald Michie wrote about the need for a 'human window' into AI systems in the 1980s. Forty years later, the ML community treats even 'syntactic sugar' (Michie) as an optional research topic. In a sense this is a continuation of the failure-prone 'strong, silent automation' (Woods). Briefly put, engineers left to themselves will continue to ignore user information needs.

Belletristic vs. practical approach to work

Look around design offices or software development offices and examine the books: manuals, catalogues, standards. For all practical purposes you will not find an anthropology journal. Researching human values, societal impact and the like is the bookish sort of activity that design engineers don't do. Engineers also tend to ask how, not why.

Stack fallacy

The stack fallacy - here - is the mistaken belief that it is trivial to build the layer above yours. The Socio-Technical System that an engineered artefact enters may be several layers above the competence of the engineers involved.

"The bottleneck for success often is not knowledge of the tools, but lack of understanding of the customer needs. Database engineers know almost nothing about what supply chain software customers want or need. They can hire for that, but it is not a core competency."


In 'Technics and Time', Bernard Stiegler says "as a 'process of exteriorization,' technics is the pursuit of life by means other than life"

Adrienne Mayor (here) has shown that the quest to build 'life through craft' - biotechne - goes back at least as far as Classical times, with Talos.

This post is a first step over some deep waters. Relevant writers include Romanyshyn, Yuk Hui, Dryzek and others, but the drive to create a machine that is monstrous and then to abdicate responsibility for it (Facebook, Amazon, and others) indicates a deeply-held darkness in our psyche and culture.


David Noble has studied the ways in which religion (forms of Christianity) and technology are intertwined, and examined the religious motivation behind the development of technology.

"When people wonder why the new technologies so rarely seem adequately to meet their human and social needs, they assume it is because of the greed and lust for power that motivate those who design and deploy them. Certainly, this has much to do with it. But it is not the whole of the story. On a deeper cultural level, these technologies have not met basic human needs because, at bottom, they have never really been about meeting them. They have been aimed rather at the loftier goal of transcending such mortal concerns altogether. In such an ideological context, inspired more by prophets than by profits, the needs neither of mortals nor of the earth they inhabit are of any enduring consequence. And it is here that the religion of technology can rightly be considered a menace. (Lynn White, for example, long ago identified the ideological roots of the ecological crisis in "the Christian dogma of man's transcendence of, and rightful mastery over, nature"; more recently, the ecologist Philip Regal has likewise traced current justifications of unregulated bioengineering to their source in late-medieval natural theology.)" (The Religion of Technology, pp. 206-207)

Featuritis as a substitute for understanding use

"Creativity is not a process...It’s people who care enough to keep thinking about something until they find the simplest way to do it." Tim Cook

“Making the simple complicated is commonplace; making the complicated simple, awesomely simple, that's creativity.” — Charles Mingus.

 "A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." - Antoine de Saint-Exupery

The problems with simplicity are as follows:

  1. Lots of engineers do not care enough about user need or societal impact to keep thinking about it (cf. Tim Cook).
  2. Lots of engineers want the machine they are bringing to life to be as advanced and complicated as possible.
  3. Finding simplicity that gives a Happy User Peak (Kathy Sierra below) means getting out of the lab and listening to people.
  4. Adding in loads of features means there is bound to be something for everybody (if they can find it).
  5. More features means you are on the job for longer.

"Creeping featurism ... is the tendency to add to the number of functions that a device can perform, often extending the number beyond all reason." Don Norman. The alternative to simplicity is typified by featuritis - here, here and here (Kathy Sierra). Thomas Landauer wrote 'The Trouble with Computers' in 1995 - here - but the culture has not changed much since.

Systemic vs. systematic thinking

Many engineers are happy doing systematic thinking in the complicated domain (Cynefin), and are unhappy coping with emergence, thinking systemically, or working in the complex domain. Notes on the difference here, here and here. Acting to meet human values requires systemic thinking, and many engineers are never going to be up to that. It seems that engineers who don't 'get' complexity are not amenable to change via a short course (or perhaps even lived experience).

[Found on Twitter]

Wednesday 8 September 2021

Does productivity matter?


The Solow Computer Paradox, or IT productivity paradox, has been running for a while now. The latest instalment in the mysteries of productivity was published recently here.

Obviously, we cannot expect economists to tell us anything useful, so a short listicle of practical reasons for the paradox may help. The Black Box approach that economists take to organisations cannot distinguish human-centred automation from human-replacement automation; a Glass Box approach is required for this. Good intentions that lead nowhere useful can be found here and here (both pdf).


Human activity in physical space is fully exposed to the panopticon of surveillance capitalism and Digital Taylorism here. Chickenized cyborg gig-economy jobs under algorithmic management dominate sectors such as logistics. Within a limited framework, these dehumanized enterprises are being 'optimised' for productivity. One can only hope that the Gradgrinds doing this find themselves locked into a pre-Ocado business model and fail. Attempts at micro-surveillance (bossware, tattleware etc.) in cognitive, social, creative and similar settings backfire, and certainly don't lead to anything resembling real productivity.

Financial engineering

Productivity is a topic of importance to an age of industrial engineering, but of questionable relevance to an age of financial engineering. Anglo-Saxon capitalism has been in the latter for some years: try investing on the basis of 'value' or 'company fundamentals'. See here and here.

Productive enterprise as busted myth

The Gervais Principle here shows the organisation as a dysfunctional structure with matters other than optimal productivity on its mind.

Functional stupidity (here pdf) limits individual and collective cognition in an 'information age'. Perhaps the key factor in current productivity shortfalls.

Bullshit Jobs here are all too prevalent, and there seems to be no effort to eradicate them. "The market has a natural tendency to undersupply good jobs." - delicately put by Acemoglu here pdf.

Gammon's Law of Bureaucratic Displacement here is not restricted to a few public sector organisations. "In a bureaucratic system … increase in expenditure will be matched by fall in production …. Such systems will act rather like 'black holes' in the economic universe, simultaneously sucking in resources, and shrinking in terms of 'emitted' production.

Bureaucracy's most destructive effects are due to its permeation and impairment of the activities of non-administrative staff.

An example is the progressive transformation of nurses from patient-centred carers to administroids whose requirement to produce detailed patient care plans and participate in workshops and seminars leaves them little time to attend to patients' basic dietary needs or prevent them developing pressure ulcers.

The second major cause derives from the mechanical nature of bureaucracy. Its proliferation is not simply the product of individual empire building. Although a bureaucratic organisation encourages, and is nourished by, individual self-interest, proliferation is inherent in the system itself."

Similarly, Pournelle's Iron Law of Bureaucracy "states that in any bureaucratic organization there will be two kinds of people":

First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.

Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.

The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.

Parkinson's Law is worth revisiting here.

Socio-Technical Systems design here is a well-established approach to designing effective, productive organisations, but is seen as a specialist interest.

Office layout

Ever since the Action Office was subverted into cubicles, office layout has been determined by unaccountable bureaucrats with no consideration of productivity (with some exceptions of course). Open plan offices and the 'creativity' demanded now are basically incompatible. If enterprises had any real interest in productivity, this situation would have changed long ago.

Central Banking

Cheap debt from central banks is keeping Zombie companies alive. These companies increase 'productivity dispersion'. Their continued existence highlights the lack of interest in productivity.

Labour market

Labour market tightness may be necessary for productivity increases - here. A post-covid possibility.

Inappropriate automation and technology

"The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency." Bill Gates

Despite the introduction of cloud documents for group working, office technology has been functionally fossilised for a long time. Simply put: when did everyone stop using PowerPoint? The tools for WFH similarly make no real use of technology for more effective working, e.g. do any video conferencing tools use automatic mediation here (pdf)? How is the budget for facilitator training? How many firms have flipped their offices here? Does management know that email is not work here? Is there any scalable use of lessons learned from CSCW? Recent automation of the hiring process (and people analytics generally) is awe-inspiringly dreadful.

In short, productivity doesn't seem to matter, apart from Bezos' galley-slaves.

As a footnote, if you know any of those souls who think that the robots will do all the work and we can sit around being creative and radical on UBI, treat them with compassion but do not join them in their delusion. We are all off to the Precariat if we don't get organised.