Tuesday, 14 February 2017

Getting started with a password manager and 2 Factor Authentication (2FA)


BLUF: A password manager offers some potentially useful, even essential, functionality. It is a technical tool, not a solution, and certainly not a panacea. It is impossible for the user to estimate the risks associated with its use, making comparisons with other approaches difficult. Yubikey U2F as an approach to 2FA has some way to go before it can be considered a usable tool for the individual, and there are no particular grounds for believing the necessary progress will be made. More generally, will automation bring us usable security in the near future? Probably not; the forces against it are too big, and there is a pretty complete lack of good tools and guidance that start with the context of use.
I picked Dashlane. Why Dashlane? U2F with Yubikey got high praise from people I respect on security. Dashlane has an edge over its competition in this regard, and it has good reviews for being a usable password manager.
The first interaction with Dashlane is fraught with problems; it asks you to create a strong password. "WTF, I thought that was your job". It does NOT tell you that you are creating your Master Password (capital M capital P). Given that 'creating a strong password' is an extremely difficult thing (and the reason for buying a password manager), and this is the one password to rule them all, there needs to be considerable user support here.
As reported here: "It's worth noting, however, that just like any software, password managers are vulnerable to security breaches. In 2011, LastPass experienced a security breach, but users with strong master passwords were not affected."

"The automation's fine when it works" Margareta Lützhöft.
After trying some unimportant passwords, I tried to use Password Changer; it turns out this is a utility that only works with some websites and none of the ones I had tried. For password managers (and 2FA) to work effectively, there needs to be some standardisation in the infrastructure, which is unlikely to happen quickly. To a novice user, "credentials not supported" is a meaningless message.
Once you have passwords generated by Dashlane, there is a sense of complete dependency on the machine. Quite a wrench. The other scary thing is the loss of physical security at the computer; get distracted and the kids are into Amazon, the flatmates are watching your pr0n etc. On a Windows machine, there is no visible status indication of whether Dashlane is active or not. There are settings to simplify logging out, and to adjust the inactivity time before it switches off, but there are contexts of use where it may still be a risk. Logging out regularly and logging in with secure passwords is a feature of working on secure networks, but is probably not a habit most folk have.
There are some quirks. On a finance website where I had what I thought was a strong password, Dashlane seemed confused and didn't add the password. On a shopping site with a weak password, I went to the 'change password' dialogue on the site; Dashlane didn't offer to create a strong password for me. There is no generic function to generate a strong password on request.
When you put in a wrong password that a site rejects, Dashlane offers to save it. On subsequently entering the right password, Dashlane doesn't re-offer. It does, of course, provide distracting alerts just at a time of anxiety and uncertainty.
Dashlane auto-fill opts to 'stay signed in' on, say, eBay, which I don't want.
I went to change my Google password; successfully (I think) got Dashlane to enter a strong password. Dashlane then offered 'replace' and 'save as new' as options, with the latter as default. I took the default option, which was wrong. Why might I want 2 Google passwords for the same account?
Sometimes Dashlane would appear at a PayPal checkout, and sometimes not. Workarounds when it doesn't are a) log in to PayPal separately or b) use the Dashlane control panel to save the password.
There have been some anomalies that might be me or Dashlane.
Dashlane runs in the background when not logged in (using 224MB) and offers to save passwords entered manually. I don't see any risk from this, but I'm not an expert.
Finance sites with customer numbers and arrangements where you enter specified parts of the password seem to defeat Dashlane, not surprisingly.
The assessment of password strength by Dashlane is a black box to the user. Speaking as a complete beginner, my suspicion is that it is aimed at brute-force attacks: changes to a password that don't add much entropy can make a big difference to the estimated strength.
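For the curious, the naive brute-force calculation behind many strength meters is easy to sketch. This is a guess at the style of estimator, not Dashlane's actual (undisclosed) algorithm; real meters such as zxcvbn also penalise dictionary words and keyboard patterns, which this deliberately ignores:

```python
import math
import string

def naive_entropy_bits(password: str) -> float:
    """Estimate entropy as length * log2(character pool size),
    i.e. assume the attacker must brute-force the whole space."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)  # 32 printable symbols
    return len(password) * math.log2(pool) if pool else 0.0

# 'Password1!' is barely harder to guess in practice than 'password1',
# but the naive estimate jumps from about 47 bits to about 66.
for pw in ["password1", "Password1!"]:
    print(pw, round(naive_entropy_bits(pw), 1), "bits")
```

This is exactly the weakness noted in the cracking material below: such meters reward superficial complexity, not actual unguessability.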
The Dashlane website offers: " Get security alerts sent straight to your device when any of your accounts may be compromised. Update your old password & stop hackers in their tracks." I don't know where they are going to get their data from, but I am not optimistic. It sounds like an invitation to sue them when they miss one.
As regards backups, Dashlane offer this: "If you disabled Sync in Dashlane and would like to back up your data – to be sure not to lose anything in case your computer stops working – we recommend using the Dashlane secure archive format. When using this format, all your Dashlane data are saved into one simple archive file protected by your master password. Keep this file in a safe place (on a USB key or on an external hard drive) and make regular backups. Note that you will have to use Dashlane again to import and restore your data from this file. Keep in mind that Dashlane will always remain free to use, so it should not be a problem!" It is possible to avoid complete machine dependency by exporting and/or printing the passwords and other data stored in Dashlane. Dashlane offer this advice: "Excel and CSV exports are unsecured and it is not a safe way to keep a back-up of your data. We strongly recommend that you delete these exports as soon as you are done with them....If you prefer to print it to keep a hard copy of your data, you can also export it in Excel or CSV format. Remember to keep this in a safe place!"
Starting to use a password manager after some years of internet use does not produce instant security, but it does provide the means for making steady improvement.

Yubico don't do usability. This Amazon review captures the heart of the matter quite well. This getting started article illustrates the required level of geekiness.
My hopes for Dashlane with Yubikey were dashed.
The video here and accompanying text make it all look so easy and effective. Alas, Dashlane was not telling me very much of the truth - the video is a lie, basically. If Dashlane and Yubikey want my trust, then they need to become trustworthy. To get started with Yubikey as 2FA for Dashlane, you first install an app such as Authy on your phone (this requires SMS 2FA), then scan the Dashlane QR code with the phone and enter the resulting code. All doable given time, but I had to ask Dashlane support several times to have this explained, as it is not on the website. The loss of trust was considerable. The expansion in security-related infrastructure was unwelcome and cannot be good. No explanation or rationale was offered. This page assumes that Yubikey is being added to a 2FA app - weird.
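For what it's worth, the QR code in that setup just carries a shared secret; the phone app then computes a time-based one-time password (TOTP, RFC 6238) from it every 30 seconds. A minimal sketch of what Authy is doing under the hood (the base32 secret below is a made-up example, not a real one):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period             # moving factor (RFC 4226)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code
```

Both ends hold the same secret and the same clock, so no network round-trip is needed; that is the whole trick.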
Before you start, you need to decide whether to use 2FA only when using your Dashlane account on a new device, or every time you log in. If you change your mind, you have to go through the exercise again. The logic for 2FA on a new device is presumably to counter the risk of someone gaining your Master Password and logging in from somewhere else. The logic for using it every time you log in is less clear. Dashlane say: "Use 2-Factor Authentication for maximum protection. 2-Factor authentication is the ultimate security mechanism as it requires you to validate, or authenticate, your identity on a 2nd device before being granted account access." I am not sure in what contexts that matters. I had expected to use it for every password use, to remove the problem of password theft. Not an option as things stand. The pervasive lack of risk-driven information on computer security includes that supplied by Dashlane. Doesn't the computer security community understand how to use risks? Apparently not.
I hadn't expected Yubikey to be a replacement for the little keypads supplied by banks, and sure enough it isn't. I see HSBC is now offering the nightmare of speaker recognition as an option. The reviews of Authy on Play Store were sad to read; lots of folk wanting it to use fingerprints.


In summary, U2F looks like a busted flush and will join PGP as a niche interest. Shame, it could have been a contender. Password managers seem unavoidable as a partial solution, and can be an aid to containing the risks of computer security. Their vulnerability to keylogging is bound to make keylogging a bigger threat; I don't know what we do then to stay one step ahead. For many contexts of use, countering the increased physical risks needs more support than Dashlane provides.
For sites offering Yubikey as a form of 2FA (Google, Facebook), I have not been prompted by Dashlane to add 2FA. I haven't investigated using Yubikey separately from Dashlane as yet. I find it hard to do the risk assessment if I now have an 'unhackable' password.

Update: Dashlane is now using 36% of the CPU on an i5 laptop, and no longer works in Firefox.

Passwords and usable security

Some notes on my exploration of password usability, password managers and Two Factor Authentication (2FA).
It appears we have a problem.
"Passwords are the most prevalent form of authentication in the digital age, and are the first line of defense against unauthorized access in most systems. Even if you are using some other form of authentication for a particular service, there’s still a password in the chain somewhere — it all comes back to relying on something somewhere being password-protected. But after 50 years of computing evolution, 123456 and password still top the list of most frequently used passwords. More than a billion passwords have been compromised in 2016, and we’ve seen breaches from companies such as Adobe, Twitter, Forbes, LinkedIn, Yahoo, LivingSocial, and Ashley Madison over the past years. Clearly, we have a systemic problem with password authentication – and it’s not going away any time soon."
We could just give up: 123456 is still the world's most popular password.
We could follow the money - Ross Anderson:
"Systems are often insecure because the people who guard them, or who could fix them, have insufficient incentives Bank customers suffer when poorly-designed bank systems make fraud and phishing easier. Casino websites suffer when infected PCs run DDoS attacks on them. Insecurity is often what economists call an ‘externality’ – a side-effect, like environmental pollution"
We should start with Bruce Schneier. Why are we trying to fix the user instead of solving the underlying security problem? "We must stop trying to fix the user to achieve security. We'll never get there, and research toward those goals just obscures the real problems. Usable security does not mean "getting people to do what we want." It means creating security that works, given (or despite) what people do." John Podesta could not have used 'password' for his Google email account, because Google won't let folk do it.

The threats

What are the threats to passwords? UK government guidance has the following:
Approaches to discovering passwords include:
  • social engineering, e.g. phishing or coercion
  • manual password guessing, perhaps using personal information ‘cribs’ such as name, date of birth, or pet names
  • intercepting a password as it is transmitted over a network
  • ‘shoulder surfing’, observing someone typing in their password at their desk
  • installing a keylogger to intercept passwords when they are entered into a device
  • searching an enterprise’s IT infrastructure for electronically stored password information
  • brute-force attacks: the automated guessing of large numbers of passwords until the correct one is found
  • finding passwords which have been stored insecurely, such as handwritten on paper and hidden close to a device
  • compromising databases containing large numbers of user passwords, then using this information to attack other systems where users have re-used these passwords.
It has been pointed out that this does not include "data breaches. No matter how good a password is, if the attackers bypass it by stealing personal data from poorly-protected databases, the technology becomes powerless. It is ridiculous that passwords and credit card numbers are encrypted but people’s personal data usually isn’t. Passwords are only one part of the issue."
Good real-world advice on threats for ordinary folk is to be found here:
There are a few ways your account passwords can be compromised.

  • Someone's out to get you. There are many people who might want to take a peek into your personal life. If these people know you well, they might be able to guess your e-mail password and use password recovery options to access your other accounts.
  • You become the victim of a brute-force attack. Whether a hacker attempts to access a group of user accounts or just yours, brute-force attacks are the go-to strategy for cracking passwords. These attacks work by systematically checking all possible passphrases until the correct one is found. If the hacker already has an idea of the guidelines used to create the password, this process becomes easier to execute.
  • There's a data breach. Every few months it seems another huge company reports a hacking resulting in millions of people's account information being compromised. And with the recent Heartbleed bug, many popular websites were affected directly.
The risks to the user clearly depend on the context of use. This does not seem to be considered in the literature. Possible use cases could include:
  • A US Secretary of State who steps out of the SCIF to use her personal Blackberry.
  • A bitcoin miner whose mobile phone account is hijacked to exploit SMS 2FA.
  • A Cambridge Professor of Security Engineering who refuses to use online banking, with good reason:
"...if you fall victim to an online fraud the chances are you will never see your money again...one of the banks’ most extraordinary feats of recent years has been their ability to shift liability away from themselves and on to the customer – aided by a Financial Ombudsman Service (FOS) that they claim rarely challenges the banks following a fraud."
  • A journalist talking to dissidents in a dangerous country.
  • Grandma logging into Facebook while staying with her daughter.
  • Grandma wanting to put her online affairs in order for her estate.
  • A student wanting to prevent his flatmates using his pr0n account when he is out.
  • A businessman going to the toilet while doing online business with the free wi-fi in a coffee shop.
  • A Civil Servant wanting to do home banking while at the office.
  • An agency ICU nurse called in at short notice needing to look up patient records.
  • A homeless person using a mobile phone to claim benefits and pay bills.
  • Someone on a list entering the USA and being asked to provide their passwords.
The threat is clearly feasible. How I became a password cracker shows this.
"At the beginning of a sunny Monday morning earlier this month, I had never cracked a password. By the end of the day, I had cracked 8,000. Even though I knew password cracking was easy, I didn't know it was ridiculously easy—well, ridiculously easy once I overcame the urge to bash my laptop with a sledgehammer and finally figured out what I was doing."
For cracking experts, it is frighteningly easy:
The ease these three crackers had converting hashes into their underlying plaintext contrasts sharply with the assurances many websites issue when their password databases are breached. ...The prowess of these three crackers also underscores the need for end users to come up with better password hygiene. Many Fortune 500 companies tightly control the types of passwords employees are allowed to use to access e-mail and company networks, and they go a long way to dampen crackers' success.

"On the corporate side, its so different," radix said. "When I'm doing a password audit for a firm to make sure password policies are properly enforced, it's madness. You could go three days finding absolutely nothing."... As Ars explained recently, the problem with password strength meters found on many websites is they use the total number of combinations required in a brute-force crack to gauge a password's strength. What the meters fail to account for is that the patterns people employ to make their passwords memorable frequently lead to passcodes that are highly susceptible to much more efficient types of attacks.

"You can see here that we have cracked 82 percent [of the passwords] in one hour," Steube said. "That means we have 13,000 humans who did not choose a good password." When academics and some websites gauge susceptibility to cracking, "they always assume the best possible passwords, when it's exactly the opposite. They choose the worst."

The state of guidance

I looked around for guidance that ordinary non-geeky folk might find and use. The state of guidance is Hmmm. A critical issue is lecturing folk about 'strong passwords'. Given the material above, what would a strong password look like? Some serious explaining is required. From my beginner situation, this and this from Good Housekeeping aren't great, and neither is this from Saga.
This looks good from CNET - but would folk find it?
This from Money Saving Expert has some interesting points, but it is hard for the lay person to evaluate the differences from other experts. The material from GetSafeOnline makes some assumptions about strong passwords, but has good points. This from the BBC has advice from Angela Sasse but is likely to be filed under "too difficult". All in all, the CNET advice looks good to me, but there is a real paucity of well-informed actionable advice (apart from what folk might find from Bruce Schneier).
I leave the last words to Eleanor Saitta @Dymaxion: "... Increasingly believe teaching security tools without a comprehensive systems literacy foundation is harm reduction at best, maybe harmful".

Update: Good material from Google here

Sunday, 2 October 2016

Ergonomics of the Internet of Things - 1


BLUF: The Internet of Things (IoT) will have lots of small, low-powered devices with complex functionality. This is a challenging context for good ergonomics. By and large, the IoT community won't even try; they'll take the same approach to usability as they have to security. The inability of a low-powered device to support usable file handling is a perhaps obscure example, but a good one. Message to the engineers: Just don't do it.

My Sony Walkman mp3 player has given good service over the years, but is not quite functioning correctly any more, and my music collection has grown well beyond its capacity. So the prospect of a small cheap mp3 player that takes a large capacity MicroSD card was too tempting. Bought.

Unusable: the tracks would not play in the right order when copied over from my PC.

MP3 file tagging is quite hard to get right, and it matters, especially for classical music (the bulk of my files).
What follows is a bit of background and a summary of what I had to do to fix it. It is the result of a good bit of digging around and trial and error. Even if decent instructions came with the device, it is too big a demand to make of a user who just wants to listen to music. The engineers who thought they had made an acceptable compromise in the interests of a low-power device were wrong.

In FAT32, directory entries carry a creation date/time and are stored in the order the files were written to the disc; the mp3 player reads them and plays the files in that order. This makes it difficult to view or play files in alphabetical or numerical order. Windows applications can sort the files by name and play them in sequence, but small devices such as mp3 players are more limited, apparently because of their low-power constraints.

I used Drivesort after trying some other applications to sort the files on the MicroSD card into alphabetical and numerical order. I see other people have had problems with Drivesort, but it is free and it worked for me. I used a mixture of Long Name Sort and Short Name Sort. I had to do it folder by folder, which was pretty tedious. There is a subdirectory function but I couldn't get it to work.
My MicroSD card came in exFAT format, so I had to format it to FAT32 before I could use Drivesort. Windows wouldn't do it, so I used guiformat (free, but PayPal donations welcome).
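The same idea Drivesort implements can also be applied at copy time: write files onto a freshly formatted card in sorted order, so the directory entries (and hence the play order) come out right. A minimal sketch, with made-up paths:

```python
import shutil
from pathlib import Path

src = Path("C:/Music")   # made-up source folder on the PC
dst = Path("E:/")        # made-up drive letter for the MicroSD card

# Copy album folders, then tracks, in name order; a FAT32 player
# will list them in exactly this write order.
for album in sorted(p for p in src.iterdir() if p.is_dir()):
    target = dst / album.name
    target.mkdir(exist_ok=True)
    for track in sorted(album.glob("*.mp3")):
        shutil.copy2(track, target / track.name)
```

Again, this is a demand no ordinary listener should ever have to meet.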


After the event for me, I hope, but this looks like a useful resource on file sorting for mp3 players.

Thursday, 22 September 2016

What autonomous ships are really about

It is easy for technical people (and others) to take technical ideas at face value. These days, this is often a serious mistake. In his brilliant monograph 'The Epic Struggle for the Internet of Things', reviewed here, Bruce Sterling puts us right. Google spent billions buying Nest, not for its thermostat technology, but to stake a claim in home automation. A technical device can be used to support a narrative for the future. So it is with autonomous ships.
First, a small paleofuture exercise. Go back five or six years. What was 'the future of shipping' then? Perhaps how big container ships might get, or what alternative fuels we might be using. Look at the ships in this piece by Wärtsilä from 2010 about shipping in 2030. Those ships have bridges and people. No mention of autonomous ships by anybody, probably.
Now, to the present. Go to any shipping event and ask about 'the future of shipping'. Autonomous ships will be mentioned within the first three sentences, and Rolls-Royce will be named. Rolls-Royce has put itself at the centre of the dominant narrative for the whole industry. This positioning is worth a fortune, and RR has done it at almost zero cost. A contribution to an EU research project or two, some press releases, some snazzy graphics from Ålesund - pennies. The autonomous ship has been the device used to stake that claim. Please don't mistake it for a technical exercise.

Friday, 17 June 2016

Pre-empting Hindsight Bias in Automated Warfare

"His Majesty made you a major because he believed you would know when not to obey his orders." Prince Frederick Karl (cited by Von Moltke)

The killer robot community is debating concepts such as Meaningful Human Control (MHC) and 'appropriate human judgment' with a view to their operationalisation in practical use. For the purpose of this post, the various terms are bundled into the abbreviation MHC.

After things have gone wrong, the challenge for incident analysis is to avoid 'hindsight bias'. To learn from an incident, it is necessary to find out why it made sense at the time - "to reconstruct the evolving mindset", to quote Sidney Dekker. There is a long history of the wrong people getting the blame for an incident - usually some poor soul at the 'sharp end' (Woods).

In a world of highly automated systems, the distinction between 'human' and 'machine' becomes blurred. In most systems, there are a number of human stakeholders to consider, and a through-life perspective is frequently useful.

In a combat situation, 'control' is an aspiration rather than a continuing reality, and losers will have lost 'control' before the battle - e.g. after the opponent has got inside their OODA loop. What is a realistic baseline for MHC in combat? We have to be able to determine this without hindsight bias.
How would an investigator determine the presence or absence of MHC in the reconstruction of an incident? It would be virtue signalling of the lowest order to wait until after an incident and then decide how to determine the presence or absence of MHC.

One aspect of such determination is to de-couple the decision making from outcomes. The classic paper on this topic is '“Either a medal or a corporal”: The effects of success and failure on the evaluation of decision making and decision makers' by Raanan Lipshitz.
There is, of course, a sizeable literature on decision quality e.g. Keren and de Bruin.


The game of 'consequences' developed here is intended to provide food for thought, and an aid to discussion on what an investigator would need to know to make a determination of MHC. It comprises short sections of dialogue. The allocation of function to human or machine, and the outcomes, are open to chance variation.
The information required to determine MHC might help in system specification, including the specifics of a 'human window'. It is not always the case that automation provides such a window - especially in the case of Machine Learning. So, how do we determine MHC in a combat situation? Try some of the exercises and see how much you would need to know. If the exercises here don't help make a determination - what would?

Please let me know in comments below, or on Twitter @BrianSJ3

As an aside, there are proven approaches to take in system development that can provide assurance of decision quality. This is not entirely a new challenge to the world of Human-System Integration. "What assurances are there that weapon systems developed can be operated and maintained by the people who must use them?"
[Guidelines for Assessing Whether Human Factors Were Considered in the Weapon Systems Acquisition Process FPCD-82-5, US GAO, 1981]

Sunday, 3 January 2016

Human aspects of automation - The 'Jokers'

I propose four 'Jokers' to be considered in the design and operation of automated / autonomous systems. These are not 'risks' as normally managed, though there may be ways of managing them for people who have taken the red pill. The Jokers are:
  • Affect dilemma: Users WILL attribute a personality to your system and act on it, which may or may not match the behaviour of the system.
  • Risk compensation: Users WILL use systems installed for safety purposes to achieve commercial gain.
  • Automation bias: Users WILL trust the system when they shouldn't.
  • Moral buffering: Remoteness brings moral and ethical distance. Users WILL become morally disengaged.
The Jokers need to be addressed during design and operation. There are no simple means of 'mitigating' or 'treating' them. To a large extent, engineers have got away with minor informal treatment of the (unrecognised) Jokers. This won't be possible with Robotics and Autonomous Systems.

Affect dilemma

Whether you intend it or not, your computer will be assigned a personality by its users, e.g. the Tamagotchi effect. This doesn't just apply to social robots; nuisance alarms and other such 'technical' features will be used by the users in assigning a personality to the computer, and this will drive their interaction with it. This seems to be an area well short of having 'best practice' and may just need lots of monitoring, with corrective action where possible. Giving the interface personality human values sounds like a good start.

Risk compensation

Wikipedia has a good entry on risk compensation. Despite being a well-accepted phenomenon, I have yet to encounter its explicit treatment in design, operation, or regulation. I should be delighted to hear of its appearance in a single safety case. 'Shared Space' stands out as a cultural oddity.
Risk compensation triggered by regulation is termed the Peltzman Effect.
[Note: Wilde's risk homeostasis is not being discussed here.]

Automation bias

"The automation's fine when it works" Margareta Lützhöft. Problems can arise when it doesn't. The reliability of modern automation means that it makes sense for the user to rely on it without checking. A summary from a paper by Missy Cumming:
"Known as automation bias, humans have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct (Mosier & Skitka, 1996; Parasuraman & Riley, 1997).  Automation bias is particularly problematic when intelligent decision support is needed in large problem spaces with time pressure like what is needed in command and control domains such as emergency path planning and resource allocation (Cummings, 2004). Moreover, automated decision aids designed to reduce human error can actually cause new errors in the operation of a system.  In an experiment in which subjects were required to both monitor low fidelity gauges and participate in a tracking task, 39 out of 40 subjects committed errors of commission, i.e. these subjects almost always followed incorrect automated directives or recommendations, despite the fact that contraindications existed and verification was possible (Skitka et al., 1999). "
Kathleen Mosier has shown that automation bias is surprisingly resistant to extra users or training, and that automation can lead to new, different types of error. AFAIK, automation bias is not addressed in Human Reliability Analysis, or explicitly addressed in design or operation. It is recognised as a concern in reports by the CAA and Eurocontrol.
The blame-the-human language of over-reliance is unwelcome but unsurprising. It raises the question of what would be optimal reliance. "The reason that current research does not unequivocally support the presence of complacency is that none of the research known has rigorously defined optimal behaviour in supervisory monitoring" (Moray & Inagaki, 2000).
Measures of trust, including trustworthiness, trustedness, and trust miscalibration, may need to be part of the answer. The Yagoda trust scale is of potential use in this context.
It could reasonably be argued that automation bias is a consequence of the affect dilemma. My grounds for having two separate Jokers are that, even when not independent, they are separate concerns from a design or operational point of view.

Moral buffering

Dumping your boyfriend by text message. Letting people go by email. "Distant punishment" in modern warfare. Moral buffering. The moral buffer is described by Missy Cummings.
"The concept of moral buffering is related to but not the same as Bandura's (2002) idea of moral disengagement in which people disengage in moral self-censure in order to engage in reprehensible conduct. A moral buffer adds an additional layer of ambiguity and possible diminishment of accountability and responsibility through an artifact or process, such as a computer interface or automated recommendations. Moral buffers can be the conduits for moral disengagement, which is precisely the reason for the need to examine ethical issues in interface design."
People can exploit moral buffering to generate the 'Agency Problem' as set out by Nassim Nicholas Taleb:
"Solution to the AGENCY PROBLEM: Never get on a plane unless the person piloting it is also on board.
Generalization: no-one should be allowed to declare war, make a prediction, express an opinion, publish an academic paper, manage a firm, treat a patient, etc. without having something to lose (or win) from the outcome
."
Taleb links the agency problem to 'skin in the game'.
A classic demonstration of moral buffering is the 'Button Defense' in 'How To Murder Your Wife' - "Edna will never know".


The Jokers are due to appear in a paper in the Safety Critical Systems Club Newsletter, which will give them a proper citation. To be added when published this month.
There is some overlap between the Jokers and BS8611. To be the subject of a future post.

Wednesday, 30 December 2015

Providing assurance of machine decision making

“All Models Are Wrong But Some Are Useful” - George Box

The aim of Human-Machine Teams (HMT) is to make rapid decisions under changing situations characterised by uncertainty. The aim of much modern automation is to enable machines to make such decisions for use by people or other machines. The process of converting incomplete, uncertain, conflicting, context-sensitive data to an outcome or decision needs to be effective, efficient, and to provide some freedom from risk. It also may need to reflect human values, legislation, social justice etc. How can the designer or operator of such an automated system provide assurance of the quality of decision making (potentially to customers, users, regulators, society at large)? 'Transparency' is part of the answer, but the practical meaning of transparency has still to be worked out.

The philosopher Jurgen Habermas has proposed that action can be considered from a number of viewpoints. To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental action and strategic action. Strategic action is part-technical, part-social: it refers to the decision-making procedure and sits at the decision-theory level, e.g. the choice between maximin and maximax criteria, and needs supplementing by values and maxims. It may be that Value Sensitive Design forms a useful supplement to Human-Centred Design to address values.
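To make the decision-theory point concrete, here is a toy payoff table with the maximin (best worst case) and maximax (best best case) criteria recommending different actions from the same data; the actions and numbers are invented for illustration:

```python
# Rows: candidate actions; columns: payoffs under three possible world states.
payoffs = {
    "act_cautiously": [4, 5, 5],   # invented numbers
    "act_boldly":     [1, 3, 9],
}

maximin = max(payoffs, key=lambda a: min(payoffs[a]))  # pessimist's criterion
maximax = max(payoffs, key=lambda a: max(payoffs[a]))  # optimist's criterion

print("maximin picks:", maximin)  # act_cautiously (worst case 4 beats 1)
print("maximax picks:", maximax)  # act_boldly (best case 9 beats 5)
```

The choice of criterion is itself a value judgment, which is precisely why strategic action needs supplementing by values and maxims.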

The Myth of Rationality

"Like the magician who consults a chicken’s entrails, many organizational decision makers insist that the facts and figures be examined before a policy decision is made, even though the statistics provide unreliable guides as to what is likely to happen in the future. And, as with the magician, they or their magic are not discredited when events prove them wrong. (…) It is for this reason that anthropologists often refer to rationality as the myth of modern society, for, like primitive myth, it provides us with a comprehensive frame of reference, or structure of belief, through which we can negotiate day-to-day experience and make it intelligible."
Gareth Morgan

The Myth of Rationality is discussed e.g. here. The limits of rationality (or perhaps its irrelevance) in military situations should be obvious. If you need a refresher, then try Star Trek 'The Galileo Seven'. The myth of the rational manager is discussed here. This is not to say that vigilant decision making is a bad thing - quite the opposite. As Lee Frank points out, rationality is not the same as being able to rationalise.

The need for explanation / transparency

The need for transparency and/or observability is discussed in a previous post here. There is an interaction between meeting this need and the approach to decision making. AFAIK the types of Machine Learning (ML) currently popular with the majors cannot produce a rationalisation/explanation for decisions/outcomes, which would seem a serious shortcoming for applications such as healthcare. If I am a customer, how can I gain assurance that a system will give the various users the explanations they need?

Approach to decision making

“It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits; it is evidently equally foolish to accept probable reasoning from a mathematician and to demand from a rhetorician scientific proofs.” - Aristotle
At some point, someone has to decide how the machine is to convert data to outcomes (what might once have been called an Inference Engine). There is a wide range of choices: the numeric/symbolic split, algorithms, heuristics, statistics, ML, neural nets, rule induction. In some cases, the form of decision making is inherent in the tool used, e.g. a constraint-based planning tool, forward-chaining production system, truth maintenance system etc. There are choices to be made in search (depth vs. breadth) and in the types of logic or reasoning to be used. There were attempts before the AI winter to match problem type to implementation, but IMHO they didn't finish the job, and worked-up methodologies such as CommonKADS would be a hard sell now. So, what guidance is available to system designers, and what forms of assurance can be offered to customers at design time? Genuine question.
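As an illustration of just one of those choices, a forward-chaining production system fits in a few lines; the rules and facts below are invented for the example:

```python
# Minimal forward chaining: fire any rule whose conditions all hold,
# add its conclusion to working memory, repeat until nothing changes.
rules = [
    ({"radar_contact", "no_transponder"}, "possible_threat"),
    ({"possible_threat", "closing_fast"}, "alert_operator"),
]
facts = {"radar_contact", "no_transponder", "closing_fast"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes 'possible_threat' and 'alert_operator'
```

Unlike much ML, every conclusion here can be traced back to the rules that fired, which is one reason such 'old-fashioned' approaches remain relevant to the transparency question above.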