Saturday, 4 January 2020

Reflections on 'Four Decades, Four AI Papers'

I really enjoyed reading 'Four Decades, Four AI Papers' by Murray Shanahan. The familiar names evoked memories and provoked ideas. Two observations (NOT criticisms) are relevant here.
  1. How tangential psychology has been to the AI story over this time. Not what I had expected from my undergraduate experience.
  2. How conceptually detached the AI development processes have been, throughout, from the development of software tools for human use.
To test these observations, I did some searches on arXiv CS (Computer Science). For benchmarking, 'psychology' / 'psychological' got 80 hits, and 'gradient descent' 413.
For 1., a search for 'Gigerenzer' produced two hits, with one of general relevance. There were no returns for 'Turvey'. As a probe into the use of psychology in the CS / AI /robotics world, this is disappointing, to say the least.
For 2., a search for 'Augmented Intelligence' yielded three returns. The Wikipedia article (cf. link) gives a good summary of the long and honourable history of the approach, though the story tails off toward the end. A search for 'Human-Centered' yielded 92 returns (many beyond AI / ML), with a dozen or so of interest. 'Usability' (which also picked up 'usable') yielded 1,316 returns - I did not investigate how many of these were related to AI / ML.
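
For anyone wanting to repeat or extend the exercise, here is a minimal sketch using the public arXiv export API. The exact query framing (quoted phrase, cs.* category wildcard) is an assumption for illustration, not a record of the searches I ran.

```python
# Minimal sketch: count arXiv CS hits for a term via the export API.
# The query framing (quoted phrase, cs.* wildcard) is an assumption.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

def arxiv_cs_hits(term):
    query = 'all:"{}" AND cat:cs.*'.format(term)
    url = ('http://export.arxiv.org/api/query?'
           + urllib.parse.urlencode({'search_query': query, 'max_results': 0}))
    with urllib.request.urlopen(url) as response:
        feed = ET.parse(response)
    # The Atom feed reports the total hit count in an OpenSearch element.
    total = feed.find('{http://a9.com/-/spec/opensearch/1.1/}totalResults')
    return int(total.text)

for term in ('psychology', 'gradient descent', 'Gigerenzer', 'Turvey'):
    print(term, arxiv_cs_hits(term))
```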

Inferences from this:
As regards 1., a potentially fruitful two-way exchange seems to have been missed over the last four or five decades. There was some exchange during the expert systems era, but the potential exemplified in 'Expert Judgment and Expert Systems' was hardly explored (the AI winter, I suppose).

As regards 2., the consequences are more damaging than a missed opportunity. Taking the approach used for researching AI and applying it to practical problems leads to a justified Frankenstein Complex. The psychology of building "applied AI" seems to come from 'God and Golem', the 'Megamachine', or Technetronics. An Augmented Intelligence approach would have pre-empted most of the fruitless discussion over bias, meaningful human control, etc. The AMA seems in favour of the approach. The Knowledge Based Systems community still seems to exist; perhaps a major refresh of KADS is due?

All in all, these decades have been interesting, but have not helped us move to a more humane society.

Tuesday, 17 September 2019

Why we won't get online safety

Why data breaches won't go away

Why phishing won't go away

"Those who would give up essential usability to purchase a little technical security deliver neither usability nor security."

There is no prospect of us having online safety in the foreseeable future. The diagrams above show the stories around data breaches and phishing.

The infosec world does not have the tools, resources, culture, management, or incentives to fix things. The bad actors carry on getting smarter.

The IT security world as presented to us ordinary users is confusing, inconsistent, and unpleasant.

Several things are clear:
1. The victim-blaming ("silly users with their 1234567 passwords") needs to be challenged at every opportunity. It is unhelpful in the extreme.
2. Digital literacy needs to highlight Michel de Certeau's 'Arts de Faire / Arts of Doing' - ways of reclaiming our autonomy from the panopticon and bad actors.
3. We need tools and resources to fight back / hold our own against the tide of incompetence and malevolence.
4. There is no obvious source for good advice, training, tools, and resources that will be heard above the noise and used at scale.

The Robert Graham Project says "I think the most important security precaution is to lie to computers compulsively". This includes made-up user names, multiple email addresses, fake answers to security questions, and multiple mobile phone numbers. The recent (very good) UK Government guidance gets close but not close enough.

Passwords and Usability

Usability of advice

Much advice on passwords is contradictory, confusing, and context-free. I propose a Scale for Evaluating Password Advice (SEPA); a sketch of how it might be scored follows the questions.

1. Does the advice give due prominence to haveibeenpwned.com?
2. Is the advice tailored to specific users and contexts (use cases), as opposed to being generic?
If 'Yes' to question 2:
3. Are there indications that the threat priorities have been based on evidence?
4. Are there indications that the risk mitigation actions proposed are based on evidence?
5. Are there indications that the advice has been tested with representative users?
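
To make the structure of the scale explicit (questions 3-5 only count once question 2 is answered 'Yes'), here is an illustrative sketch; the one-point-per-question weighting is an assumption for illustration.

```python
# Illustrative sketch of SEPA scoring. One point per question is an
# assumption; the conditional gate on question 2 follows the scale as stated.
SEPA_QUESTIONS = [
    "Gives due prominence to haveibeenpwned.com",
    "Tailored to specific users and contexts (use cases)",
    "Threat priorities based on evidence",
    "Risk mitigation actions based on evidence",
    "Tested with representative users",
]

def sepa_score(answers):
    """answers: list of five booleans, one per question in order."""
    score = int(answers[0]) + int(answers[1])
    if answers[1]:  # Questions 3-5 only apply to context-specific advice.
        score += sum(int(a) for a in answers[2:])
    return score

# Example: generic advice that mentions haveibeenpwned.com scores 1 of 5.
print(sepa_score([True, False, False, False, False]))
```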

Password Managers

There are plenty of reviews of password managers. These all seem to focus on technical aspects, with little or no understanding of usability. On the basis of limited personal experience, I suggest the following criteria for password manager usability:
1. The supplier website sets out which use cases it meets, and how, and which use cases it does not support.
2. The manager links to the haveibeenpwned.com API (a sketch of such a check follows this list).
3. The manager generates user-friendly passphrases.
4. The manager works without the cloud.
5. The manager helps the user cope with the vagaries of various websites, e.g. no paste allowed.
6. The manager is compatible with producing a paper storage system.
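
As an illustration of point 2, here is a minimal sketch of a check against the haveibeenpwned.com Pwned Passwords range API. The k-anonymity design means only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
# Sketch of a haveibeenpwned.com Pwned Passwords check via the public
# k-anonymity range API (no key needed). Only the 5-char hash prefix is sent.
import hashlib
import urllib.request

def pwned_count(password):
    digest = hashlib.sha1(password.encode('utf-8')).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = 'https://api.pwnedpasswords.com/range/' + prefix
    with urllib.request.urlopen(url) as response:
        body = response.read().decode('utf-8')
    for line in body.splitlines():
        candidate, _, count = line.partition(':')
        if candidate == suffix:
            return int(count)  # Times this password appears in known breaches.
    return 0

print(pwned_count('password123'))  # A large number; never use this password.
```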

Use Cases

A collection of not-very-thought-through use cases is below, for illustrative (rather than design) purposes:
  • A US Secretary of State who steps out of the SCIF to use her personal Blackberry.
  • A bitcoin miner whose mobile phone account is hijacked to exploit SMS 2FA.
  • A Cambridge Professor of Security Engineering who refuses to use online banking, with good reason:
"...if you fall victim to an online fraud the chances are you will never see your money again...one of the banks’ most extraordinary feats of recent years has been their ability to shift liability away from themselves and on to the customer – aided by a Financial Ombudsman Service (FOS) that they claim rarely challenges the banks following a fraud."
  • A journalist talking to dissidents in a dangerous country.
  • Grandma logging into Facebook while staying with her daughter.
  • Grandma wanting to put her online affairs in order for her estate.
  • A student wanting to prevent his flatmates using his pr0n account when he is out. 
  • A businessman going to the toilet while doing online business with the free wi-fi in a coffee shop.
  • A Civil Servant wanting to do home banking while at the office.
  • An agency ICU nurse called in at short notice needing to look up patient records. 
  • A homeless person using a mobile phone to claim benefits and pay bills. 
  • Someone on a list entering the USA and being asked to provide their passwords.


Monday, 21 May 2018

Systems of interest for autonomous platforms

The discussion of autonomous cars, ships, and aircraft has generally focused on the moving platform - by default, or by assumption. It would be helpful to consider the various systems of interest. A very small start has been made by distinguishing UAV and UAS, but we are still at the point where most of the systems of interest do not have names. There are a number of Technical Systems (identified as TSn) and a number of Socio-Technical Systems (identified as STSn). The systems that we need to consider appear to be as follows:

TS1: The platform (UAV, UUV, driverless rickshaw, robot, etc.). These usually have names.
TS2: The platform plus off-platform 'cloud' (robo-cloud).
TS3: A collective of platforms. Not necessarily a 'swarm' - but a group of platforms working together.
TS4: A collective of platforms plus off-platform 'cloud'.

STS1: Any of TS1-4 (so long as it is specified) plus an 'Operator' who can be held to account. The 'Operator' may be a pilot, Master, etc. Near-real-time situation awareness for an off-platform Operator is only possible with TS2 and TS4. This system has a name in the case of UAS (only, I think). It is hard to see how a 'car driver' without specialist training can be held to account. The current standard of media reporting has the "driver of a driverless car" being blamed for an accident!

STS2: Any of TS1-4 (so long as it is specified) plus Operator, Responsible Owner, and Design Authority. This is the minimum system for proper accountability. It needs a name. Note that TS1-4 plus Operator and Responsible Owner has not been included in the list, as it is effectively obsolete.

STS3: Any of TS1-4 (so long as it is specified) plus Operator, Responsible Owner, Design Authority, Legal Authority, Insurer, provider of financial legitimacy, and provider of employment and training legitimacy. This is the system that provides the licence to operate, i.e. the 'blunt end' as well as the 'sharp end' in Dave Woods' terminology.

STS4: Typically, the 'cloud' platform operator and financiers. The system that manages the flow of money and information rights. If the 'values' in this system are unethical, then 'value alignment' of TS2 and TS4 is difficult to achieve (Milo Minderbinder UAVs, anyone?). 'Platform' is often used as a shorthand name here, just to add confusion.
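
To make the composition explicit, here is a minimal sketch of the taxonomy as data structures. The class and field names are mine, purely for illustration.

```python
# Illustrative encoding of the TS/STS taxonomy above. Names are mine;
# the point is that each STS is a technical system plus named human roles.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TechnicalSystem:
    label: str             # TS1..TS4
    platforms: int         # 1 for TS1/TS2, >1 for the collectives TS3/TS4
    off_platform_cloud: bool

@dataclass
class SocioTechnicalSystem:
    label: str                   # STS1..STS4
    technical: TechnicalSystem   # any of TS1-4, so long as it is specified
    roles: List[str] = field(default_factory=list)

TS2 = TechnicalSystem('TS2', platforms=1, off_platform_cloud=True)

# STS2: the minimum system for proper accountability.
STS2 = SocioTechnicalSystem('STS2', TS2,
    roles=['Operator', 'Responsible Owner', 'Design Authority'])

# Near-real-time situation awareness for an off-platform Operator
# requires the cloud link (TS2 or TS4).
assert STS2.technical.off_platform_cloud
```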

Wednesday, 3 January 2018

Getting started with safe internet use

This is written for members of my family who are starting out with PCs on the internet (no comments here about Apple).

Install Webroot WSA - it uses less of your computer power running in the background than other AV tools. Remember to run scans regularly - don't just rely on it working in the background.

Putting your data on a disk 'partition' separate from Windows etc. is a good idea and may enable you to recover your data, e.g. when Windows dies. If you are new to computers, it is best to get help, even though it is straightforward. This guide seems clear (like much from Tom's Hardware). Make sure your application data (such as email) is on the new partition.

Install CCleaner - this is useful for removing crud from your computer - do regular cleaning, including cookies. It is also good for uninstalling programs, and for choosing which programs you want to run when the computer starts up. CCleaner is also available as a Portableapp (see below).
If you don't want to use CCleaner for some reason, you can set up Windows to remove crud.

This guide is a good introduction to staying safe on the internet.  Read it and then come back here.

Internet security is now so complex that working out your personal threat profile is probably impossible. Treat security like dieting or exercise: do the important things first and then keep working at it at a manageable pace.
Now go and read it properly, then come back.

The Robert Graham Project says "I think the most important security precaution is to lie to computers compulsively".  This includes made-up user names, multiple email addresses, fake answers to security questions, and multiple mobile phone numbers. If you think the other side is playing fair, read this (technical, I know).

Security questions are broken - more than you would think. Fake answers (that you record safely!) are becoming essential. The first school Robert Graham went to was &*O)IYHPU&G!!!.
Generating a fake identity can be helped with this.
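
If you would rather let the machine invent the fake answers, here is a tiny sketch using Python's standard secrets module (my illustration, not a tool from the links above). Record the answers safely, e.g. in your password manager or paper system.

```python
# Sketch: generate unguessable answers to security questions.
# The questions are placeholders; record the answers somewhere safe.
import secrets

questions = ["First school", "Mother's maiden name", "First pet"]
for q in questions:
    # token_urlsafe gives a random string that no amount of research
    # into your real life will recover.
    print(q, '->', secrets.token_urlsafe(12))
```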

It is worth remembering that Ross Anderson, Professor of Computer Security at Cambridge, does not use online banking because the risks are all with the customer.

Update: you might want to take the Data De-tox to remove any unwelcome public data about yourself.

1. Re-using passwords
Checking for data breaches is very sound advice, and not often given. Do this first and regularly.
Data breach is the main threat for most of us. This is why re-using passwords is such a bad thing.
Google, Facebook, and PayPal seem to take our security seriously. If you can use them as a login, that might reduce the risk. Minimising the number of people with your credit card details is prudent.

Don't let your browser store your passwords and fill in forms - that seems to be broken. See also here. If you have a password manager, it might be as well to disable auto-fill for forms (if it will let you).

My limited experience of a password manager (Dashlane) is mixed. A book, and a password generator set to give you a readable password (passphrase), might be better to start with. NCSC advice is to use passphrases: 4 random dictionary words, or CVC-CVC-CVC style passwords, picked for memorability. Advice from Angela Sasse: "A longer password is preferable overall, but that has its own problems... More than 50% of passwords are now entered on touchscreen devices, and longer passphrases create a significant burden on touchscreen users... Passwords are rarely cracked by brute force. They are mostly captured through phishing and malware, and with those attacks it does not matter how long or complex your password is."

Unfortunately, password strength meters (things that tell you if your password is weak or strong) are not good indicators of real strength.

Perhaps a password manager for all the unimportant sites, and something personal for the vital ones. Using Charles Dickens might (or might not) be helpful. 1Password now checks whether a password has been breached, which is definitely useful. The fact that this is new and novel shows how far the security industry has to go as regards usability (or utility).

This advice from NCSC is not bad on password re-use.

Mark Burnett: "Always remember the three main authentication factors: what can be easily guessed, what can be left in a cab, and what can be chopped off."

Two Factor Authentication (2FA). Yubikey has not got the idea of usability (yet). This guide to it was recommended by Zeynep Tufekci. A good idea if you can get the hang of it (not on day one, perhaps). Biometrics (including FaceID and fingerprint ID) look like being more trouble than they are worth, but if long passcodes for a fully-used smartphone are too hard, then they are probably better than nothing. Barton Gellman points out you can make this less painful with an all-numeric PIN: 11 digits or more provide strong security. Big advantage: you get the big-button number pad on the unlock screen. It has to be a truly random number; your idea of "random" isn't. [I am not an expert, but I suspect the randomness required and the number of digits are a function of how much you are under threat from major adversaries.]
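
To make 'truly random' concrete: let the machine pick the digits. A sketch, with the arithmetic for why 11 digits is considered strong (about 36.5 bits of entropy):

```python
# Gellman's point made concrete: a truly random 11-digit PIN, plus the
# arithmetic for why length matters. log2(10**11) is about 36.5 bits.
import math
import secrets

pin = ''.join(secrets.choice('0123456789') for _ in range(11))
print(pin)
print('entropy: {:.1f} bits'.format(11 * math.log2(10)))
```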

2. Locking phone
How much do you need to use your phone? Maybe it doesn't need to be a repository for your life. Android phones are not a good place to keep secure matters, whereas iPhones can be ok. Maybe you use a cheap feature phone for some / all of the time? Cheap alternative phone numbers sound a worthwhile investment.

SIM hijacking is a real and frightening thing.  My mobile provider (Three) has security questions that you can set up for when they answer the phone. Well worth doing.

3. Adblockers and browsers
Perhaps use Chrome with a particular Google ID for transactions that matter and a different browser for other matters. Your computer may not support two browsers open at once, because they are so resource-hungry. As regards Adblockers, I use Ghostery, happily. Opera has one built in. Chrome is going to get one fairly soon (from Google). Adblockers are worth it. Read Section C from DecentSecurity here.

4. Backups
Using the cloud helps, but remember 'the cloud' is short for 'someone else's computer'. I haven't used Framasoft, but it is an alternative to Silicon Valley. Sort out a device to do backups for any data that matter to you ("you only need to clean the teeth you want to keep" - same thing with data and backups). FreeFileSync will give you more capability than you need, is not too hard to learn, and is blisteringly fast. It is also available as a PortableApp. For important items such as photographs, it is worth investing in 'archival Blu-Ray' - an external Blu-Ray read/write drive is not that dear.

5. Email accounts
Go and read the advice again.
Email client: You can use webmail for basic purposes just fine, but an email client means you have the emails on your machine (and can back them up). Chaos Intellect is a great client for PCs etc.; it can run on a memory stick, and has a great approach to use on phones. It isn't free, but is well worth it - not least for the quality of support. If you are restricted to free, then the Opera email client might be a good choice. If you are into Chat, it can handle that as well.
You will need several email accounts. A Gmail account makes sense, as everyone has one. If your ISP offers email accounts, that is another. Then maybe Yahoo or Zoho. If you start to use Zoho seriously, you will need to pay a subscription, but you could do much worse.
@pogue25 recommends using disposable email addresses when you have to give an email address to a site that is bound to spam you.  This generates them.

Update: Ransomware
If you do get stung and locked out of your computer by Ransomware, then it is possible that the keys can be found here.

Applications
MS Office really likes to run macros - the major risk from dodgy email attachments. Unless you really really need it, don't install it. Use LibreOffice instead.
If you are going to be using other people's computers (or the ones at the library) to help you learn, then it may be a good idea to put applications on a memory stick. PortableApps is a bit more complicated to use than having applications installed on the computer - but only a bit. It reduces the demands on the computer and allows you to take just a stick with you. You can also install the Opera browser on a memory stick (or on your computer). It is pretty good, and it would mean that your bookmarks travel with you easily. PortableApps includes the Opera browser and email client, but since these are the main applications, it might be worth installing them on the USB stick directly.

Photos on social media
The pace of facial recognition on social media is alarming. The Spartacus Hack may well be worth doing. Just put up some misleading pics and labels.

Further reading
There is good advice here and here. The dos and don'ts here are for folk at higher risk than many (including break-ups and stalking), but the more you follow them, the safer you'll be. Advanced material here.

Sunday, 17 December 2017

Does Autonomous = Small?

The Clyde Puffers had a crew of 3 and capacity of about 6 TEU

Wage bills have been a factor driving ever-larger lorries and container ships. The transport companies have successfully externalised the knock-on costs of ever-larger ports, depots, and warehouses, and the impact on city streets. Removing the wages bill could open the way to a radical reduction in size. The perennial problems of inter-modality could perhaps also be eased. Changing the scale of logistics could open the way to better 'last mile' operations.

Using cargo bikes to replace or complement vans is an example of the scope for changing scale, and thought is being given (e.g. here) to the need to standardise small containers (no, I'm not proposing autonomous cargo bikes for city centres). Such containers will hopefully be compatible with urban mobility platforms on the lines of M.U.L.E. (not the US military MULE project).

Thanks to @thinkdefence, there is a discussion of small container standards; see the section on JMIC. These would be great for mobility platforms, but are not for cargo bikes.

The huge electric autonomous trucks being investigated in the USA may have a long-haul role there, but perhaps the real market is for something much smaller.

If delivery drones are ever to gain scale, there need to be standardised landing pads, preferably palletised and compatible with small-scale standard containers, e.g. biscuit tins.

More speculatively, we can envisage a 21st Century replacement for the Clyde Puffer: small autonomous Ro-Ro vessels (Damen have some starting points), some Mexeflote where local infrastructure is missing, and M.U.L.E.-like platforms to local depots.

Operations at this more human scale are likely to be more sustainable, and with lower knock-on costs. The trick will be getting the incentives right for it to happen, supported by timely standardisation.

Monday, 27 November 2017

Turning 'Meaningful Human Control' into practical reality

The Fake News

The ambiguity in 'Meaningful Human Control' (MHC) may have been good for generating discussion, but it is no good for system design or operation. Rules Of Engagement are bad enough without adding more ambiguity. Some folk seem surprised that 'ethics' needs converting to a technical matter - how else do they think 'ethics' will be implemented at design or run time? The legal viewpoint is not the only one that matters, and expertise in design, support, operation, and training seems thin on the ground to date. This post attempts to make a start on describing the way ahead and the practical issues to be faced.


Doug Wise, former Deputy Director, Defense Intelligence Agency: “There are human beings that actually fly the MQ-9 drone – people are actually observing and make the decisions to either continue to observe or use whatever is the lethality that is inherent in the platform. There are human beings at every stage. Now let's assume that at some point the human beings release the platform to act on its own recognizance, which is based on the basic information on the payload that it carries and the information that it continues to be updated with. Then it is allowed to behave in a timescale to take data, process it, and make decisions and act on those decisions. As the platforms become more sophisticated, our ability to let it go will become earlier and earlier.” There will be people involved in all stages of the killer robot lifecycle. The discussion around killer robots, like the discussion around other autonomous platforms, has an unhelpful focus on the built artefact - the robot itself. As UNIDIR has pointed out, a 'system of systems' approach is needed.

The Good News

"What assurances are there that weapon systems developed can be operated and maintained by the people who must use them?" This question, from Guidelines for Assessing Whether Human Factors Were Considered in the Weapon Systems Acquisition Process FPCD-82-5, US GAO, 1981, might be a more useful framing. Assurance requires a combination of inspecting the design, evaluating performance, and auditing processes (for design, operation etc.). Many military systems need something resembling MHC - aircraft cockpits, command centres etc. In fact it is hard to think of a system that doesn't. Not surprisingly, therefore, there is a considerable body of expertise in Human System Integration (HSI) aimed at providing assurance of operability.

Quality In Use (QIU) is defined as: "The degree to which a product or system can be used by specific users to meet their needs to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use." (ISO 25010, 2011). The term is part of a well-formed body of quality and system engineering standards (civil and military) aimed at providing assurance of QIU. In practical terms, this approach is the way ahead (because it exists). Pre-Contract Award Capability Evaluation is likely to be a useful tool in helping to build and operate systems with MHC.

The Bad News

“The reason most people do not recognize an opportunity when they meet it is because it usually goes around wearing overalls and looking like Hard Work.” Henry Dodd
Reliance on coming up with a good definition of MHC won't work for the folk at the sharp end of killer robot operation. The test of whether good intentions have translated into good deeds will be after things have gone wrong. There is a need to improve military accident investigation (with some notable exceptions). Unless there is good Dekker-compatible practice for accident investigation of smart systems and weapons, more good folk who put their lives on the line for their country are going to be used as fall guys. Mock trials with realistic case material would be a good start - overdue, really. Sensible investigation of the 'system of systems' is bound to find shortfalls in numerous aspects of both human and technical design and operation. Looking for clear human/machine responsibilities at the sharp end is no more than scapegoating.

“It’s generally hopeless trying to clearly distinguish between automatic, automated and autonomous systems. We use those words to refer to different points along a spectrum of complexity and sophistication of systems. They mean slightly different things, but there aren’t clear dividing lines between them. One person’s “automated” system is another person’s “autonomous” system. I think it is more fruitful to think about which functions are automated/autonomous.” Paul Scharre. The critical parameter for automatic / autonomous is 'context coverage', which considers QIU both in specified contexts of use and in contexts beyond those initially explicitly identified. For autonomous vehicles, it is becoming recognised that the issue is not 'when' but 'where'. A similar situation will continue to apply to smart weapons. The safe and legal operation of smart weapons will remain context-dependent.

'Ordinary' automation is usually done badly, and has not learned the Human Factors lessons proffered since the mid-1960s. There are many unhelpful myths that continue to bring more bad automation into operation, e.g. 'allocation of function', 'human error', 'cognitive bias'. Really, MHC of ordinary automation is far from common.

HSI is practised to a much more limited degree than it should be, so the pool of expertise is smaller than would be needed. The organisational capability to deliver or operate usable systems is very variable in both industrial and military organisations. Any sizeable switch to 'Centaur' Human-Autonomous Teamwork will hit cultural, organisational, and personnel obstacles on a grand scale.
The current killer robot exceptionalism will be unhelpful if it proves to be a deterrent to applying HSI, or if it continues to be a distraction from the wider problems of remote warfare now that we have said Goodbye Uncanny Valley.

Back in the days of rule-based Knowledge Based Systems, the craft of the Knowledge Engineer involved spending 10% of the time devising an appropriate knowledge representation and 90% of the time trying to convince engineers that the human decision-making approach was not flawed but contained subtleties that allowed adaptation to context, and that the proposed machine reasoning was seriously flawed. With the current fashion for GPU-powered Machine Learning (ML), this may not be possible. Further, XAI (explainable AI) is a long way from a proven remedy for the opaque nature of ML. ML can be brittle and fail in unexpected ways; the claim that the X part of the system will be able to generate an explanation under such circumstances is an extraordinary claim without extraordinary evidence.