Sunday, 3 January 2016

Human aspects of automation - The 'Jokers'

I propose four 'Jokers' to be considered in the design and operation of automated / autonomous systems. These are not 'risks' as normally managed, though there may be ways of managing them for people who have taken the red pill. The Jokers are:
  • Affect dilemma: Users WILL attribute a personality to your system and act on it, which may or may not match the behaviour of the system.
  • Risk compensation: Users WILL use systems installed for safety purposes to achieve commercial gain.
  • Automation bias: Users WILL trust the system when they shouldn't.
  • Moral buffering: Remoteness brings moral and ethical distance. Users WILL become morally disengaged.
The Jokers need to be addressed during design and operation. There are no simple means of 'mitigating' or 'treating' them. To a large extent, engineers have got away with minor informal treatment of the (unrecognised) Jokers. This won't be possible with Robotics and Autonomous Systems.

Affect dilemma

Whether you intend it or not, your computer will be assigned a personality by its users (e.g. the Tamagotchi effect). This doesn't just apply to social robots; nuisance alarms and other such 'technical' features will be used by the users in assigning a personality to the computer, and this will drive their interaction with it. This seems to be an area well short of having 'best practice' and may just need lots of monitoring, with corrective action where possible. Giving the interface personality human values sounds like a good start.

Risk compensation

Wikipedia has a good entry on risk compensation. Although it is a well-accepted phenomenon, I have yet to encounter its explicit treatment in design, operation, or regulation. I should be delighted to hear of its appearance in a single safety case. 'Shared Space' stands out as a cultural oddity.
Risk compensation triggered by regulation is termed the Peltzman Effect.
[Note: Wilde's risk homeostasis is not being discussed here.]

Automation bias

"The automation's fine when it works" Margareta Lützhöft. Problems can arise when it doesn't. The reliability of modern automation means that it makes sense for the user to rely on it without checking. A summary from a paper by Missy Cumming:
"Known as automation bias, humans have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct (Mosier & Skitka, 1996; Parasuraman & Riley, 1997).  Automation bias is particularly problematic when intelligent decision support is needed in large problem spaces with time pressure like what is needed in command and control domains such as emergency path planning and resource allocation (Cummings, 2004). Moreover, automated decision aids designed to reduce human error can actually cause new errors in the operation of a system.  In an experiment in which subjects were required to both monitor low fidelity gauges and participate in a tracking task, 39 out of 40 subjects committed errors of commission, i.e. these subjects almost always followed incorrect automated directives or recommendations, despite the fact that contraindications existed and verification was possible (Skitka et al., 1999). "
Kathleen Mosier has shown that automation bias is surprisingly resistant to extra users or training, and that automation can lead to new, different types of error. AFAIK, automation bias is not addressed in Human Reliability Analysis, or explicitly addressed in design or operation. It is recognised as a concern in reports by the CAA and Eurocontrol.
The blame-the-human language of over-reliance is unwelcome but unsurprising. It raises the question of what optimal reliance would be. "The reason that current research does not unequivocally support the presence of complacency is that none of the research known has rigorously defined optimal behaviour in supervisory monitoring" (Moray & Inagaki, 2000).
Measures of trust, including trustworthiness, trustedness, and trust miscalibration, may need to be part of the answer. The Yagoda trust scale is of potential use in this context.
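As a hedged sketch of how trust miscalibration might be operationalised - the scores, thresholds and function names below are invented for illustration, and are not the Yagoda scale itself:

```python
def trust_miscalibration(reported_trust, observed_reliability):
    """Difference between the operator's reported trust in the automation
    (0.0-1.0, e.g. a normalised questionnaire score) and the automation's
    observed reliability (0.0-1.0, e.g. fraction of correct alerts).

    Positive values suggest over-trust (a precondition for automation bias);
    negative values suggest under-trust (disuse)."""
    return reported_trust - observed_reliability

def classify(miscalibration, tolerance=0.1):
    """Illustrative three-way classification; the tolerance band is arbitrary."""
    if miscalibration > tolerance:
        return "over-trust: risk of automation bias"
    if miscalibration < -tolerance:
        return "under-trust: risk of disuse"
    return "roughly calibrated"

# Example: operator rates trust 9/10, but the aid is right only 70% of the time.
print(classify(trust_miscalibration(0.9, 0.7)))  # over-trust
```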
It could reasonably be argued that automation bias is a consequence of the affect dilemma. My grounds for having two separate Jokers are that, even when not independent, they are separate concerns from a design or operational point of view.

Moral buffering

Dumping your boyfriend by text message. Letting people go by email. "Distant punishment" in modern warfare. Moral buffering. The moral buffer is described by Missy Cummings:
"The concept of moral buffering is related to but not the same as Bandura's (2002) idea of moral disengagement in which people disengage in moral self-censure in order to engage in reprehensible conduct. A moral buffer adds an additional layer of ambiguity and possible diminishment of accountability and responsibility through an artifact or process, such as a computer interface or automated recommendations. Moral buffers can be the conduits for moral disengagement, which is precisely the reason for the need to examine ethical issues in interface design."
People can exploit moral buffering to generate the 'Agency Problem' as set out by Nassim Nicholas Taleb:
"Solution to the AGENCY PROBLEM: Never get on a plane unless the person piloting it is also on board.
Generalization: no-one should be allowed to declare war, make a prediction, express an opinion, publish an academic paper, manage a firm, treat a patient, etc. without having something to lose (or win) from the outcome
."
Taleb links the agency problem to 'skin in the game'.
A classic demonstration of moral buffering is the 'Button Defense' in 'How To Murder Your Wife' - "Edna will never know".


The Jokers are due to appear in a paper in the Safety Critical Systems Club Newsletter, which will give them a proper citation. To be added when published this month.
There is some overlap between the Jokers and BS8611. To be the subject of a future post.

Wednesday, 30 December 2015

Providing assurance of machine decision making

“All Models Are Wrong But Some Are Useful” - George Box

The aim of Human-Machine Teams (HMT) is to make rapid decisions under changing situations characterised by uncertainty. The aim of much modern automation is to enable machines to make such decisions for use by people or other machines. The process of converting incomplete, uncertain, conflicting, context-sensitive data to an outcome or decision needs to be effective, efficient, and to provide some freedom from risk. It also may need to reflect human values, legislation, social justice etc. How can the designer or operator of such an automated system provide assurance of the quality of decision making (potentially to customers, users, regulators, society at large)? 'Transparency' is part of the answer, but the practical meaning of transparency has still to be worked out.

The philosopher Jürgen Habermas has proposed that action can be considered from a number of viewpoints. To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental action and strategic action. Strategic action is part-technical, part-social: it refers to the decision-making procedure and sits at the decision-theory level, e.g. the choice between maximin and maximax criteria, and needs supplementing by values and maxims. It may be that Value Sensitive Design forms a useful supplement to Human-Centred Design to address values.
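To make the decision-theory choice concrete, here is a minimal sketch of maximin versus maximax over an invented payoff table; the scenario and numbers are illustrative only:

```python
# Payoffs for each action under each possible state of the world.
payoffs = {
    "act_cautiously":   [4, 5, 6],   # modest outcome whatever happens
    "act_aggressively": [1, 3, 9],   # poor worst case, best best case
}

# Maximin: choose the action whose worst-case payoff is highest (pessimistic).
maximin = max(payoffs, key=lambda a: min(payoffs[a]))

# Maximax: choose the action whose best-case payoff is highest (optimistic).
maximax = max(payoffs, key=lambda a: max(payoffs[a]))

print(maximin)  # act_cautiously
print(maximax)  # act_aggressively
```

The same data yields different 'rational' choices depending on the criterion, which is exactly where values and maxims have to come in.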

The Myth of Rationality

"Like the magician who consults a chicken’s entrails, many organizational decision makers insist that the facts and figures be examined before a policy decision is made, even though the statistics provide unreliable guides as to what is likely to happen in the future. And, as with the magician, they or their magic are not discredited when events prove them wrong. (…) It is for this reason that anthropologists often refer to rationality as the myth of modern society, for, like primitive myth, it provides us with a comprehensive frame of reference, or structure of belief, through which we can negotiate day-to-day experience and make it intelligible."
Gareth Morgan

The Myth of Rationality is discussed e.g. here. The limits of rationality (or perhaps its irrelevance) in military situations should be obvious. If you need a refresher, then try Star Trek 'The Galileo Seven'. The myth of the rational manager is discussed here. This is not to say that vigilant decision making is a bad thing - quite the opposite. As Lee Frank points out, rationality is not the same as being able to rationalise.

The need for explanation / transparency

The need for transparency and/or observability is discussed in a previous post here. There is an interaction between meeting this need and the approach to decision making. AFAIK the types of Machine Learning (ML) currently popular with the majors cannot produce a rationalisation/explanation for decisions/outcomes, which would seem a serious shortcoming for applications such as healthcare. If I am a customer, how can I gain assurance that a system will give the various users the explanations they need?
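For what a post-hoc rationalisation could look like, here is a hedged sketch of permutation importance - a generic, model-agnostic technique, not a claim about any particular vendor's ML; the toy model and data are invented:

```python
import random

def permutation_importance(model, X, y, n_repeats=10):
    """Score each feature by the average drop in accuracy when that
    feature's column is shuffled - a crude, model-agnostic 'explanation'."""
    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)
    base = accuracy(X)
    importances = []
    for col in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            shuffled = [row[:] for row in X]
            column = [row[col] for row in shuffled]
            random.shuffle(column)
            for row, v in zip(shuffled, column):
                row[col] = v
            drops.append(base - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy black-box: it 'decides' on feature 0 only, so feature 1 should score ~0.
model = lambda row: row[0] > 0.5
X = [[0.2, 0.9], [0.7, 0.1], [0.9, 0.8], [0.3, 0.4]]
y = [False, True, True, False]
print(permutation_importance(model, X, y))
```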

Approach to decision making

“It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits; it is evidently equally foolish to accept probable reasoning from a mathematician and to demand from a rhetorician scientific proofs.” - Aristotle
At some point, someone has to decide how the machine is to convert data to outcomes (what might have been called an Inference Engine at one point). There is a wide range of choices; the numeric/symbolic split, algorithms, heuristics, statistics, ML, neural nets, rule induction. In some cases, the form of decision making is inherent in the tool used e.g. constraint-based planning tool, forward-chaining production system, truth maintenance system etc. There are choices to be made in search (depth vs. breadth) and in types of logic or reasoning to be used. There were attempts before the AI winter to match problem type to implementation but IMHO they didn't finish the job, and worked-up methodologies such as CommonKADS would be a hard sell now. So, what guidance is available to system designers, and what forms of assurance can be offered to customers at design time? Genuine question.
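By way of illustration of one point in that design space, a minimal forward-chaining production system can be written in a few lines; the facts and rules below are invented:

```python
# Rules: if all premises are in working memory, assert the conclusion.
rules = [
    ({"radar_contact", "no_transponder"}, "unidentified_track"),
    ({"unidentified_track", "closing_fast"}, "raise_alert"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all satisfied,
    until no new facts can be derived (a naive fixed-point loop)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"radar_contact", "no_transponder", "closing_fast"}, rules))
# derives 'unidentified_track', then 'raise_alert'
```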

Sunday, 20 December 2015

Human Machine Teaming - Data Quality Management

"A mathematician is a man who is willing to assume anything except responsibility." (Theodore von Karman)

"Rapid, effective decision making under conditions of uncertainty whilst retaining Meaningful Human Control (MHC)" is the sort of mantra associated with Human Machine Teaming (HMT). A purely mathematical approach to risk and uncertainty is unlikely to match the needs of real world operation, as Wall St. has discovered.

So, during the design of a system where the data are potentially incomplete, uncertain, contradictory etc. how does the designer offer assurance that data quality is being addressed in an appropriate manner? Or are we doomed to crafted systems on the basis of "trust me"?

Not all forms of uncertainty should be treated in the same way; this applies to data fusion, say, and most other tasks. It is my impression that the literature on data quality and information quality is not being used widely in the AI, ML, HMT community just now - I'd be delighted to be corrected on that.

ISO/IEC 25012 (“Software Engineering – Software Product Quality Requirements and Evaluation (SQuaRE) – Data Quality Model”, 2008) categorises quality attributes into fifteen characteristics from two different perspectives: inherent and system-dependent. This framework may or may not be appropriate to all applications of HMT, but it makes the point that there is more than just "uncertainty". Richard Y. Wang has proposed that "incorporating quality information explicitly in the development of information systems can be surprisingly useful" in the context of military image recognition.
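As a sketch of what carrying explicit quality information might look like, here each datum holds a few ISO/IEC 25012-style attributes alongside its value; the attribute subset, record layout and weighting formula are illustrative assumptions, not the standard's normative model:

```python
from dataclasses import dataclass

@dataclass
class QualifiedDatum:
    """A value plus explicit quality metadata, so downstream fusion can
    weigh accuracy, currency and completeness separately rather than
    collapsing everything into one 'uncertainty' number."""
    value: float
    accuracy: float      # estimated closeness to the true value (0-1)
    currency: float      # how fresh the reading is (0-1, 1 = just measured)
    completeness: float  # fraction of expected fields actually present (0-1)
    source: str

def fusion_weight(d: QualifiedDatum) -> float:
    # Illustrative weighting: a stale but accurate track may still
    # outweigh a fresh but inaccurate one.
    return d.accuracy * (0.5 + 0.5 * d.currency) * d.completeness

a = QualifiedDatum(12.3, accuracy=0.9, currency=0.3, completeness=1.0, source="radar")
b = QualifiedDatum(11.8, accuracy=0.5, currency=1.0, completeness=0.8, source="visual")
print(fusion_weight(a), fusion_weight(b))
```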

HMT takes place in the context of Organisational Information Processing. The good news is that this is quite well-developed for flows within an organisation (less so for dealing with an opposing organisation). The bad news is that Weick is hard work. The key term is equivocality, and I suggest that the HMT community use it as an umbrella term, embracing 'uncertainty' and other such parameters. Media richness theory helps.

"A man's gotta know his limitations" (Clint Eastwood). "So does a robot" (BSJ)

A key driver for data quality management is whether a system (or agent etc.) assumes an open world or a closed one. Closed world processing has to know the fine details, e.g. how a Google self-driving car interacts with a cyclist on a fixed-wheel bicycle. By contrast, GeckoSystems takes an open world approach to 'sense and avoid' and doesn't have to know these fine details. It would seem that closed world processing needs explicit treatment of data quality to avoid brittleness.
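The difference shows up even in miniature code. Under a closed-world assumption an unrecorded fact is false; under an open-world assumption it is merely unknown, and that third value is what forces explicit data quality handling. A toy sketch with invented vocabulary:

```python
known_true = {"cyclist_ahead"}
known_false = {"pedestrian_ahead"}

def closed_world(fact):
    # Anything not recorded as true is assumed false - brittle if the
    # sensors or the ontology missed something (the fixed-wheel cyclist).
    return fact in known_true

def open_world(fact):
    # Three-valued: absent facts are 'unknown', so the system must plan
    # for them (e.g. widen the avoidance envelope) instead of denying them.
    if fact in known_true:
        return True
    if fact in known_false:
        return False
    return "unknown"

print(closed_world("fixed_wheel_bicycle"))  # False - silently wrong
print(open_world("fixed_wheel_bicycle"))    # 'unknown' - at least honest
```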

Time flies like an arrow, fruit flies like a banana.

At some point, the parameters acquire meaning, or semantic values. "We won’t be surfing with search engines any more. We’ll be trawling with engines of meaning." (Bruce Sterling). The parameters may be classified on the basis of a folksonomy, or the results of knowledge elicitation. So far as I can see, the Semantic Revolution has a way to run before achieving dependable performance. Roger Schank has been fairly blunt about the present state of the art. Semantic parameters are likely to have contextual sensitivity, which may be hard to characterise.

If a system is to support human decision making, then it may need to provide information well beyond that required analytically for the derivation of a mathematical solution. Accordingly, the system may need to manage data about the quality of processing. For robotic state estimation, the user may need more than a point best estimate. Confidence estimates may need to be expressed in operational terms, rather than mathematical ones. Indeed, the HMT may need to reason about uncertainty as much as under uncertainty.
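A sketch of the 'operational terms' point: mapping a point estimate and its spread onto phrases an operator can act on. The bands and wording below are invented, not taken from any standard:

```python
def operational_confidence(estimate, std_dev, decision_threshold):
    """Translate (estimate, spread) into an operator-facing statement
    instead of a bare point value. Margin is measured in standard
    deviations from the decision threshold."""
    margin = abs(estimate - decision_threshold) / std_dev
    if margin > 3:
        phrase = "confident - safe to act on"
    elif margin > 1:
        phrase = "probable - cross-check if time permits"
    else:
        phrase = "marginal - do not act without independent confirmation"
    return f"estimate {estimate:.1f} ({phrase})"

# Example: estimated separation 4.8 km, spread 0.5 km, minimum safe 4.0 km.
print(operational_confidence(4.8, 0.5, 4.0))  # probable - cross-check
```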

This post is scrappy and home-brewed. Suggestions for improvement are welcome. If I am anywhere near right, then the state of art needs advancing quite swiftly. As a customer I wouldn't know how to gain assurance that the management of data quality would support safe and effective operation, and as a project manager, I wouldn't know how to offer such assurance.

Update: This is nice on unknown unknowns and approaches to uncertainty in data:

Friday, 11 December 2015

Human-Machine Teaming - meeting the Centaur challenge

At the centre of the US DoD Third Offset is Human-Machine Teaming (HMT), with five building blocks:
  1. Machine Learning
  2. Autonomy / AI
  3. Human-Machine Collaboration
  4. Assisted human operations
  5. Autonomous weapons.
The analogy with Centaur Chess is a powerful one, and potentially offers the best use of both people (H) and machines (M). However, this approach is not easy to implement. This post is a quick look at some issues of design and development for HMT. Other aspects of HMT will be addressed in subsequent posts (hopefully).

1. Human-Centred Automation

The problems of SAGE, one of the first automated systems, were well-documented in 1963. Most automated systems built now still have the same problems. "H" managed to get the UK MoD to use the phrase "So-called unmanned systems" to reflect their reality. There are people working on autonomous systems who really believe there will be no human involvement or oversight. These people will, of course, build systems that don't work. In summary, the state of the art is not good: an engineering-led technical focus leads to "human error".
The principles of human-centred automation were set out by Billings in 1991:
  • To command effectively, the human operator must be involved.
  • To be involved, the human operator must be informed.
  • The human operator must be able to monitor automated systems.
  • Automation systems must be predictable.
  • The automated system must also be able to monitor the human operator.
  • Each of the elements of the system must have knowledge of the other’s intent. 
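One way of reading these principles as engineering requirements is that the automation must continuously publish its state, intent, and limits in a form both the operator and the system can monitor. A toy sketch, purely illustrative and not derived from Billings' text:

```python
from dataclasses import dataclass, field

@dataclass
class AutomationStatus:
    """What the automation would have to publish, continuously, for the
    operator to stay involved, informed and able to monitor it."""
    mode: str                      # what it is doing now
    intent: str                    # what it will do next, and why
    confidence: float              # how sure it is (0-1)
    limits: list = field(default_factory=list)  # conditions it cannot handle

status = AutomationStatus(
    mode="track-following",
    intent="will request handover in 90 s: sensor degradation ahead",
    confidence=0.72,
    limits=["heavy precipitation", "unmapped roadworks"],
)
print(status.intent)  # predictability: the operator is told before, not after
```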
We know a great deal about the human aspects of automation. The problem is getting this knowledge applied.
There is a considerable literature on the technical aspects of HMT, including work on the Pilot's Associate / Electronic Crewmember. The challenge is getting this expertise used.

2. Human-System Integration process

Human-System Integration (HSI) is more talked-about than done. For HMT, HSI has to be pretty central to design, development, and operation. This will require enlightened engineers, programmers, risk managers etc. There are standards etc. for HSI (e.g. the Incremental Commitment Model), though these do not address HMT-specific matters.

The state of Cognitive Systems Engineering (CSE) is lamentable. I can take some share of the blame here, having dropped my topics of interest in the AI winter (the day job got in the way). Nearly all of it is academic rather than practical. Some of the more visible approaches have very little cognition, minimal systems thinking, and no connection with engineering. Gary Klein's work is probably the best place to find practical resources (starting with Decision-Centred Design).
MANPRINT: the integration of people and machines may go very deep and require closer coupling of Human Factors Engineering and Human Resources (selection, training, career structures etc.) than has been the case to date. Not easy at scale.
Simulation-based design is probably the way to achieve iteration through to wargaming to support operation. Obviously there are issues of fidelity (realism) here, but they should be manageable.

3. Capability, ownership, responsibilities

The industrial capability to deliver HMT is limited, and the small pool of expertise is divided by the AI winter. Caveat emptor will be vital, and specialist capability evaluation tools for HMT don't exist (though HSI capability evaluation tools could be expanded to do the job). 'Saw one, did one, taught one' won't work here unless you want to fail.
The data (big or otherwise), algorithms, heuristics, rules, concepts, folksonomies etc. are core to military operations (and may be sensitive). It would be best if they were owned and managed by a responsible military organisation, rather than a contractor. In a sense, they could be considered an expansion of doctrine.

4. Test, acceptance

If HMT is to work as a team, then it may well be that the M becomes personally tailored to the individual H. This goes against military procurement in general, and raises questions about how to conduct T&E and acceptance. If the M evolves in harmony with the H, then this raises further difficulties. Not insuperable, but certainly challenging. Probably simpler in the context of the extended military responsibility proposed above.

5. State of the art

We are seeing the return of hype in AI. Sadly, it seems little was learned from the problems of the previous phase, exacerbated by somewhat impractical hype on ethics.
It is still as much craft as engineering to build responsible systems; there is a real shortage of good design guidance. HMT has been the province of the lab, and has not been translated into anything resembling mainstream system acquisition. Much to do.

Thursday, 3 December 2015

Smart shipping and the human element

Martin Stopford (MS) has written about 'smart' shipping here and here. There is a related article here and videos here and here. He makes a number of important points about seafaring. This article picks up some of these points and responds, a) because Martin Stopford's proposal is likely to be influential, and b) because it has the potential to be positive for seafarers. This is an opportunity to be grasped with both hands by those concerned with seafarers or the human element.
MS summary: 
Smart shipping would bring about a much greater integration of ship operations with the internet and big data. The Smart Shipping model focuses on the transport performance of the company/fleet as a whole, rather than a collection of individual ships, resulting in wide reaching improvements in transport productivity; safety; personnel development; and logistics. The need is to spend an appropriate amount of money on how assets are going to be used.



Four problems with the existing business are identified: first, that the technology used is old and economies of scale have been taken to an extreme; second, that there is a real problem in attracting crew; third, that the market has changed so that two-thirds of the cargo is controlled by non-OECD countries; and fourth, that the industry has very weak customer relationships.


1. From Gambling to Management

MS summary:
Shipping needs to move from gambling to management. "The problem with the bulk shipping business over the last 20-30 years is it's been a gambling business, not a management business ... It's a management solution: you semi-automate ship operations, you semi-automate navigation and you implement door-to-door logistics." The focus here is on optimizing the overall management of the business by treating the transport performance of ships as a single production unit, like a BMW car plant. The result is QA systems that really work, not a set of manuals nobody consults.
Good management is quite straightforward, and well summarised in Henry Stewart's Happy Manifesto. His Four Steps to Happiness are also good sense. Henry Mintzberg's brief 'Musings on Management' are as relevant as ever, particularly the change from top-down to inside-out.
The challenge for shipping is: is shipping smart enough to embark on becoming a Wirearchy, "a dynamic two-way flow of power and authority, based on knowledge, trust, credibility and a focus on results, enabled by interconnected people and technology"?
The 'smart' in smart shipping may signal a move to a knowledge economy and intellectual capital.

2. Customer relations and innovation

MS Summary:
“We need to put these things together and squeeze some more value out the transport chain and put a smile on the customer’s face. The customer should not be the person you are beating to death over the negotiating table.”
"The way you treat your employees is the way they will treat your customers" Richard Branson

Eric von Hippel and his team at MIT have made the case for user-centred innovation, where ideas from users and customers are sought and used. 'Lead' users may not just be a source of innovation; they may be the innovators themselves.
As regards the necessity for such disruption, do remember, if you don't disrupt yourself, someone else will do it for you.

3. Jobs, Job To Be Done (JTBD), careers, incentives

MS summary:
The need is for a business model which allows employees in a shipping company to be effective as a single team, with better and more rewarding career opportunities for young people and greater integration between ship and shore. Tomorrow's shipping is most of all about people: better use of people as a resource. Manage ship and shore personnel into a more productive team with better career opportunities. Break down the ship-shore barrier, create a team spirit and opportunities for a career, and build a whole new culture. Run the fleet as a team: experienced engineers ashore, junior ones at sea getting responsibility early, with support. "We've got 1.7 people onshore, 20 people on the ship, the 20 people on the ship hate the people onshore and the 1.7 onshore think the guys on the ship are a load of idiots. Is that the way to run a business? Integrate the systems and get it to run better."
Automate & de-skill ship operations & navigation: "It's not about the crew, it's about automation of navigation"
The proposal here is very exciting. It will require technology and jobs to be developed together, probably using Human Centred Design (HCD). Demonstrations and trial facilities will probably be needed if STCW is to advance at any sort of pace.
There will need to be a careful watch on incentives that block progress:“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Upton Sinclair

4. Getting there - managing change

MS Summary:
We should not assume it will be easy.
The stakeholder gridlock in shipping will make change difficult. The full horrors of the alternative - of not changing - may not have been thought through yet.
The need for change is clear: "The Internet is nothing less than an extinction-level event for the traditional firm as we have known it for the past 100 years. The Internet makes it possible to create totally new forms of economic entities." (Esko Kilpi). Bandwidth limitations have sheltered shipping from the full force of the internet. That protection is ending.
As regards the process of organisational change, it is important to remember Virginia Satir's remark "No one likes to be should upon". For a few resources on change, see here.

5. The 'Human Error' Reduction Fallacy

Unsurprisingly, Martin Stopford has fallen for the 'human error' reduction fallacy being pushed by the 'toys for boys' autonomous ship/car/toaster crowd. However, he recognises that driverless ships are a very difficult topic which should only be considered when the industry has much more experience and depth.
"Human error is the symptom of system failure, not the cause" Dan Maurinho "In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid." James Reason
The 'human error' reduction fallacy is an unhealthy way to approach the design of automatic or autonomous systems. It is also likely to miss any big business opportunities that emerge. Human-Centred Automation based on Billings would be the basis of safe and effective operation.

6. Big data and safety

MS Summary:
Centralise analysis. Moving to shipping and maritime economics, this could turn out to be a tremendously exciting era for Maritime analysts.
It is early days, but it seems that, left to itself, big data tends to become big brother, using the panopticon for micro-management. Platform capitalism seems to like moral buffering upstream of the algorithms, with a moral crumple zone for the folk at the sharp end. This is not what shipping needs. It also looks like it isn't enough on its own. Here is an example from healthcare:
"In the real world, a big factor in patient health are social factors like mental health, social isolation, and transportation issues. Since this data is not typically collected, it is largely ignored by Big Data analytics. By collecting this data in a structured way, it can combine with the clinical data to create a truly complete care plan for the patient."
Centralised analysis may be right for economics, but for management, big data needs to be used to support smart people who can appreciate the context. "Appreciate the situation, don't situate the appreciation".

7. Automation to support a new business model.

MS summary:
Smart phone style apps and standard interfaces. De-skilling. Don't just automate navigation, but onboard operations, systems.
Using systems that are familiar to people from the 21st Century sounds a good idea. Folk are going to use that which is easy to use, whether Type Approved or not. Commonality with what comes up the gangway is sensible.
De-skilling by automation has usually been counter-productive. Changing the organisation to give more responsibility to junior people by supporting them with a network of other people plus automation - that is do-able.
Automation and IT need to be seen as oxygen, not lubricating oil. Chris Boorman has a nice piece on the difference between human-centred automation and human replacement automation. "Automation enables enterprises to automate those core processes not to make cuts, but to free up resource to work on new disruptive projects. Faced with an increasingly complex world of technology - cloud, mobile, big data, internet of things - as well as growing consumer expectations, every business needs to turn to automation or perish....Every industry is going through a period of change as new technologies and new entrants look to disrupt the status-quo.  Automation is a key enabler for helping enterprises to disrupt their own industries and drive that change.  Acquiring new customers, retaining customers, driving business analytics, consolidating enterprises following mergers or driving agility and speed are all critical business imperatives.  Automation delivers the efficiency and enables the new way of thinking from your brightest talent to succeed."

8. Setting expectations

As Martin Stopford has recognised, moving to smart shipping is not going to be easy. A detailed passage plan is obviously inappropriate, but some sort of route with easy stages might be welcome.

9. Early actions

Some early actions are obvious:
  • Set the means of achieving scalable learning in place, including trying out some creation spaces.
  • Chris Boorman again: "Automation needs to be ingrained in an organization’s DNA early on and not deployed later as a replacement measure for existing job functions. It should instead be used to allow people and resources to be more focused on driving the business forwards, rather than on just keeping the lights on." The ingraining needs to start now.
  • Rob Miles has proposed levels of enlightenment (later slides in the presentation) as regards integrating safety into business. Smart shipping will need to have some enlightenment, and this will need to include the regulators.
Footnote: I understand the objections to the term 'human element' and sympathise. However, it is the IMO term. If we can use it to convey a Socio-Technical Systems (STS) approach, with a human element and a technical element, then it will do some good. See here for resources on STS.

The gloomy bits: from the CyClaDes EU project on crew-centred design, it has become apparent to me that shipping is far from ready to do crew-centred design - there is a long way to travel for all stakeholders. The 'human error' issue also goes to the heart of matters, from accident investigation through to daily operations. The smart people needed for smart shipping include all sorts of people, e.g. engine experts who fit the wrong rings in the wrong grooves.

Thursday, 1 October 2015

Automation anxiety


Thoughts before the Big Potatoes event on Automation Anxiety.
Firstly, with no negotiation, everyone interested in the topic needs to watch and read Bruce Sterling on Smart City States. We need to understand the money before we look at the technology. Sterling's book on the Internet of Things is good (also widely available as an eBook).
The best summary of 'the future of work' that I've seen is this by Janna Anderson. Quite long, but as brief as it could be, given the breadth of coverage.
The topic of people and technology has been debated for some long time without much resolution, which must say something. Here is Paul Goodman in 1969 - Can technology be humane?

My first thoughts are in the mind map above, and very gloomy they are.
My work has been concerned with encouraging the adoption of a human-centred approach to design and operation, mostly in a technical context. The default approach is human replacement automation. The problems with this have been well-documented at regular intervals, starting with Nehemiah Jordan in 1963, working on SAGE. It is very difficult to shift engineers, their customers, and their managers to a human-centred approach. Things are still at the guerrilla usability level of warfare, winning small battles slowly. So the Cambrian Explosion of automation coming our way will be annoying and hard to use. We will have micromanagement, BS jobs, etc. Whitlock's very sensible Human Values will continue to be ignored in a transactional economy.
I would contend that platform capitalism, masquerading as the sharing economy (see here too), is winning, and that platform cooperativism is not going to catch up unless a mass of cavalry appears out of the sky. Capital will continue to outperform labour, and hence inequality will grow. Who owns the robots matters a lot, and it isn't looking good for the likes of me. Your local regulator is going to get crushed.
Secure jobs have gone - join the precariat. The sustainability of professions such as lawyer or surgeon is now under question because of the impact of automation. Maybe one day people will choose to be artisanal surgeons, but the disruption between now and then is going to be a rough ride.
I am aware that Human Resources departments have their limitations, but I fear that people analytics will be worse, and less ethical.
Finally, because of the gloomy nature of my own thoughts, I asked some Scandinavian friends for their views. One working in Norway is up to his eyes in automation / autonomy. His involvement means that the sponsors want a human-centred approach, and his work will deliver this. Not happening in UK/US much, I fear. A Dane with a fairly global perspective sees his industry imbued with some techno-utopian thinking, which it doesn't have the capability to deliver.  A Swede, who was active in Swedish human-centred work is now trying to export this to an Anglo-Saxon economy. She is unsure that the Nordic economies will be able to continue in their human-centred ways and resist the globalisation challenge.

Friday, 25 September 2015

Ergonomics - the taxi driver test

How are we to communicate ergonomics to the population at large? - asks Sarah Sharples, as President of the Chartered Institute of Ergonomics and Human Factors.
My short answer is - I don't try to.
"What is, or are, ergonomics? What is, or are, Human Factors? If ergonomics and Human Factors are the same, then what is "ergonomics AND Human Factors?" These questions - and their answers - confuse people, and rightly so.
Human-Centred Design, on the other hand, enters people's vocabulary on one hearing. Generally, folk are pleased to hear it exists, and annoyed that it is not the norm in equal measure.

I practise communicating Human-Centred Design to the population at large by wearing the jacket in the picture. I forget about the writing on the back, so I am surprised when people in a queue ask me "What is Human-Centred Design?". I have got better at giving easily-understood answers. The guy in the chip shop was up for a long conversation on the merits of early Nokia phones (thank you Timo).

On my business card etc. I describe myself as a People-Systems Integrator, and this seems to be easily understood.

Ergonomics now tries to be a 'discipline' that does 'science' and a 'profession' that does 'practice', and the result is a mess. The explanatory logo at the International Ergonomics Association website has only one text label up front and high-contrast - Human-Centered Design.
Most areas of work distinguish professional practice and underpinning science, e.g.
Professional practice - Underpinning scientific discipline:
  • Farming - Agricultural research
  • Medicine - Medical research, immunology, physiology etc.
  • Architecture - Architectural research
  • Software engineering - Computer science
  • 1970s: Ergonomics - Ergonomics research, human sciences
  • 2015 (formal): Ergonomics - Ergonomics
  • 2015 (in real life): UX, HCD, IA, Ergonomics - Human sciences, social sciences, design thinking, Ergonomics