Wednesday, 3 January 2018

Getting started with safe internet use

This is written for members of my family who are starting out with PCs on the internet (no comments here about Apple).

Install Webroot WSA - it uses less of your computer power running in the background than other AV tools. Remember to run scans regularly - don't just rely on it working in the background.

Putting your data on a disk 'partition' separate from Windows etc. is a good idea and may enable you to recover your data, e.g. when Windows dies. If you are new to computers, it is best to get help, even though it is straightforward. This guide seems clear (like much from Tom's Hardware). Make sure your application data (such as email) is on the new partition.

Install CCleaner - this is useful for removing crud from your computer - do regular cleaning, including cookies. It is also good for uninstalling programs, and for choosing which programs you want to run when the computer starts up. CCleaner is also available as a Portableapp (see below).
If you don't want to use CCleaner for some reason, you can set up Windows to remove crud.

This guide is a good introduction to staying safe on the internet.  Read it and then come back here.

Internet security is now so complex that working out your personal threat profile is probably impossible. Treat security like dieting or exercise: do the important things first and then keep working at it at a manageable pace.
Now go and read it properly, then come back.

The Robert Graham Project says "I think the most important security precaution is to lie to computers compulsively".  This includes made-up user names, multiple email addresses, fake answers to security questions, and multiple mobile phone numbers. If you think the other side is playing fair, read this (technical, I know).

Security questions are broken - more than you would think. Fake answers (that you record safely!) are becoming essential. The first school Robert Graham went to was &*O)IYHPU&G!!!.
This can help with generating a fake identity.

It is worth remembering that Ross Anderson, Professor of Security Engineering at Cambridge, does not use online banking because the risks are all with the customer.

Update: you might want to take the Data De-tox to remove any unwelcome public data about yourself.

1. Re-using passwords
Checking for data breaches is very sound advice, and not often given. Do this first and regularly.
Data breach is the main threat for most of us. This is why re-using passwords is such a bad thing.
Google, Facebook, and PayPal seem to take our security seriously. If you can use them as a login, that might reduce the risk. Minimising the number of people with your credit card details is prudent.

Don't let your browser store your passwords and fill in forms - that seems to be broken. See also here. If you have a password manager, it might be as well to disable auto-fill for forms (if it will let you).

My limited experience of a password manager (Dashlane) is mixed. A book, and a password generator set to give you a readable password (passphrase), might be better to start with. NCSC advice is to use passphrases: four random dictionary words, or CVC-CVC-CVC style passwords, picked for memorability. Advice from Angela Sasse: "A longer password is preferable overall, but that has its own problems... More than 50% of passwords are now entered on touchscreen devices, and longer passphrases create a significant burden on touchscreen users... Passwords are rarely cracked by brute force. They are mostly captured through phishing and malware, and with those attacks it does not matter how long or complex your password is."
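To see what the NCSC-style suggestions look like in practice, here is a minimal Python sketch of my own (not an NCSC tool) that produces a four-random-word passphrase and a CVC-CVC-CVC password. The word-list path is an assumption: any plain-text list of words, one per line, will do.

```python
# A minimal sketch (not an NCSC tool): a four-random-word passphrase and a
# CVC-CVC-CVC password, both drawn from the OS's secure random source.
import secrets

def four_word_passphrase(wordlist_path="/usr/share/dict/words"):
    # The word-list path is an assumption; substitute any plain-text list,
    # one word per line.
    with open(wordlist_path) as f:
        words = [w.strip().lower() for w in f
                 if w.strip().isalpha() and 4 <= len(w.strip()) <= 8]
    return " ".join(secrets.choice(words) for _ in range(4))

def cvc_password():
    consonants = "bcdfghjklmnpqrstvwxz"
    vowels = "aeiou"
    cvc = lambda: secrets.choice(consonants) + secrets.choice(vowels) + secrets.choice(consonants)
    return "-".join(cvc() for _ in range(3))

print(four_word_passphrase())   # e.g. 'marble otter funnel harbour'
print(cvc_password())           # e.g. 'lub-tif-raz'
```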

Unfortunately, password strength meters (things that tell you if your password is weak or strong) are not good indicators of real strength.

Perhaps a Password Manager for all the unimportant sites, and something personal for the vital ones. Using Charles Dickens might (or might not) be helpful. 1Password now checks whether a password has been breached, which is definitely useful. The fact that this is new and novel shows how far the security industry still has to go as regards usability (or utility).
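The breach check that 1Password does can also be done by hand against the Have I Been Pwned 'Pwned Passwords' service. The sketch below assumes their k-anonymity range endpoint (https://api.pwnedpasswords.com/range/...), which only ever sends the first five characters of the password's SHA-1 hash; check their current documentation before relying on it.

```python
# A minimal sketch of checking a password against the Have I Been Pwned
# Pwned Passwords range API (assumed endpoint; only the first five hex
# characters of the SHA-1 hash leave your machine).
import hashlib
import urllib.request

def times_pwned(password):
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(times_pwned("password123"))   # a horribly large number, one hopes never yours
```

Anything with a non-zero count has appeared in a breach and should never be re-used.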

This advice from NCSC is not bad on password re-use.

Mark Burnett: "Always remember the three main authentication factors: what can be easily guessed, what can be left in a cab, and what can be chopped off."

Two Factor Authentication (2FA). Yubikey has not got the idea of usability (yet). This guide to it was recommended by Zeynep Tufekci. A good idea if you can get the hang of it (not on day one, perhaps). Biometrics (including Face ID and fingerprint ID) look like being more trouble than they are worth, but if long passcodes for a fully-used smartphone are too hard, then they are probably better than nothing. Barton Gellman points out you can make this less painful with an all-numeric PIN: 11 digits or more provide strong security. Big advantage: you get the big-button number pad on the unlock screen. It has to be a truly random number. Your idea of “random” isn’t. [I am not an expert, but I suspect the randomness required and the number of digits are a function of how much you are under threat from major adversaries.]
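If you want a genuinely random PIN rather than one you made up, a few lines of Python will do it (a sketch of mine, using the operating system's cryptographic random source):

```python
# A minimal sketch: an 11-digit PIN drawn from the OS's cryptographic
# random source, rather than from a human's (predictable) imagination.
import secrets

def random_pin(digits=11):
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

print(random_pin())   # e.g. '40912283751' - record it somewhere safe
```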

2. Locking phone
How much do you need to use your phone? Maybe it doesn't need to be a repository for your life. Android phones are not a good place to keep anything sensitive, whereas iPhones can be OK. Maybe use a cheap feature phone some or all of the time? Cheap alternative phone numbers sound like a worthwhile investment.

SIM hijacking is a real and frightening thing.  My mobile provider (Three) has security questions that you can set up for when they answer the phone. Well worth doing.

3. Adblockers and browsers
Perhaps use Chrome with a particular Google ID for transactions that matter, and a different browser for everything else. Your computer may struggle to have two browsers open at once, because they are so resource-hungry. As regards adblockers, I use Ghostery, happily. Opera has one built in. Chrome is going to get one fairly soon (from Google). Adblockers are worth it. Read Section C from DecentSecurity here.

4. Backups
Using the cloud helps, but remember 'the cloud' is short for 'someone else's computer'. I haven't used Framasoft, but it is an alternative to Silicon Valley. Sort out a device to do backups of any data that matter to you ("you only need to clean the teeth you want to keep" - the same goes for data and backups). FreeFileSync will give you more capability than you need, is not too hard to learn, and is blisteringly fast. It is also available as a PortableApp. For important items such as photographs, it is worth investing in 'archival Blu-Ray' - an external Blu-Ray read/write drive is not that dear.
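For a sense of what a backup tool is doing under the bonnet, here is a minimal Python sketch of a one-way copy (additive: it never deletes anything from the destination, so it is not a true mirror). The two paths are placeholders, and FreeFileSync will do this job far better.

```python
# A minimal sketch of a one-way backup copy (additive: it never deletes from
# the destination). The paths are placeholders - point them at your data
# partition and your backup drive. Requires Python 3.8+ for dirs_exist_ok.
import shutil
from pathlib import Path

SOURCE = Path("D:/MyData")               # placeholder: your data partition
DESTINATION = Path("E:/Backups/MyData")  # placeholder: your backup drive

def backup(src, dst):
    dst.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dst, dirs_exist_ok=True)   # overwrite older copies in place

if __name__ == "__main__":
    backup(SOURCE, DESTINATION)
    print(f"Copied {SOURCE} -> {DESTINATION}")
```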

5. Email accounts
Go and read the advice again.
Email client: You can use webmail for basic purposes just fine, but an email client means you have the emails on your machine (and can back them up). Chaos Intellect is a great client for PCs etc.: it can run on a memory stick and has a great approach to use on phones. It isn't free, but it is well worth it - not least for the quality of support. If you are restricted to free, then the Opera email client might be a good choice. If you are into Chat, it can handle that as well.
You will need several email accounts. A Gmail account makes sense, as everyone has one. If your ISP offers email accounts, that is another. Then maybe Yahoo, or Zoho.  If you start to use Zoho seriously, you will need to pay a subscription, but you could do much worse.
@pogue25 recommends using disposable email addresses when you have to give an email address to a site that is bound to spam you.  This generates them.

Update: Ransomware
If you do get stung and locked out of your computer by Ransomware, then it is possible that the keys can be found here.

MS Office really likes to run macros - the major risk from dodgy email attachments. Unless you really really need it, don't install it. Use LibreOffice instead.
If you are going to be using other people's computers (or the ones at the library) to help you learn, then it may be a good idea to put applications on a memory stick. PortableApps is a bit more complicated to use than having applications installed on the computer - but only a bit. It reduces the demands on the computer and allows you to take just a stick with you. You can also install the Opera browser on a memory stick (or on your computer); it is pretty good, and your bookmarks would then travel with you easily. PortableApps includes the Opera browser and email client, but since these are the main applications, it might be worth installing them on the USB stick directly.

Photos on social media
The pace of facial recognition on social media is alarming. The Spartacus Hack may well be worth doing. Just put up some misleading pics and labels.

Further reading
There is good advice here and here. The dos and don'ts here are for folk at higher risk than many (including break-ups and stalking), but the more of it you follow, the safer you'll be. Advanced material is here.

Sunday, 17 December 2017

Does Autonomous = Small?

The Clyde Puffers had a crew of 3 and a capacity of about 6 TEU.

Wage bills have been a factor driving ever-larger lorries and container ships. The transport companies have successfully externalised the knock-on costs of ever-larger ports, depots, and warehouses, and the impact on city streets. Removing the wage bill could open the way to a radical reduction in size. The perennial problems of inter-modality could perhaps also be eased. Changing the scale of logistics could open the way to better 'last mile' operations.

Using cargo bikes to replace or complement vans is an example of the scope for changing scale, and thought is being given, e.g. here, to the need to standardise small containers (no, I'm not proposing autonomous cargo bikes for city centres). Such containers will hopefully be compatible with urban mobility platforms on the lines of M.U.L.E. (not the US military MULE project).

Thanks to @thinkdefence, there is a discussion of small container standards; see the section on JMIC. These would be great for mobility platforms but not for cargo bikes.

The huge electric autonomous trucks being investigated in the USA may have a long-haul role there, but perhaps the real market is for something much smaller.

If delivery drones are ever to gain scale, there will need to be standardised landing pads, preferably palletised and compatible with small-scale standard containers, e.g. biscuit tins.

More speculatively, we can envisage a 21st Century replacement for the Clyde Puffer: small autonomous Ro-Ro vessels (Damen have some starting points), some Mexeflotes where local infrastructure is missing, and M.U.L.E.-like platforms to local depots.

Operations at this more human scale are likely to be more sustainable, and with lower knock-on costs. The trick will be getting the incentives right for it to happen, supported by timely standardisation.

Monday, 27 November 2017

Turning 'Meaningful Human Control' into practical reality

The Fake News

The ambiguity in 'Meaningful Human Control' (MHC) may have been good for generating discussion, but it is no good for system design or operation. Rules Of Engagement are bad enough without adding more ambiguity. Some folk seem surprised that 'ethics' needs converting into a technical matter - how else do they think 'ethics' will be implemented at design or run time? The legal viewpoint is not the only one that matters, and expertise in design, support, operation, and training seems thin on the ground to date. This post attempts to make a start on describing the way ahead and the practical issues to be faced.

Doug Wise, former Deputy Director, Defense Intelligence Agency “There are human beings that actually fly the MQ-9 drone – people are actually observing and make the decisions to either continue to observe or use whatever is the lethality that is inherent in the platform. There are human beings at every stage. Now lets assume that at some point the human beings release the platform to act on its own recognizance, which is based on the basic information on the payload that it carries and the information that it continues to be updated with. Then it is allowed to behave in a timescale to take data, process it, and make decisions and act on those decisions. As the platforms become more sophisticated, our ability to let it go will become earlier and earlier.” There will be people involved in all stages of the killer robot lifecycle.  The discussion around killer robots, like the discussion around other autonomous platforms, has an unhelpful focus on the built artefact - the robot itself. As UNIDIR has pointed out, a 'system of systems' approach is needed.

The Good News

"What assurances are there that weapon systems developed can be operated and maintained by the people who must use them?" This question, from Guidelines for Assessing Whether Human Factors Were Considered in the Weapon Systems Acquisition Process FPCD-82-5, US GAO, 1981, might be a more useful framing. Assurance requires a combination of inspecting the design, evaluating performance, and auditing processes (for design, operation etc.). Many military systems need something resembling MHC - aircraft cockpits, command centres etc. In fact it is hard to think of a system that doesn't. Not surprisingly, therefore, there is a considerable body of expertise in Human System Integration (HSI) aimed at providing assurance of operability.

Quality In Use (QIU) is defined as: "The degree to which a product or system can be used by specific users to meet their needs to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use." ISO 25010 (2011). The term is part of a well-formed body of quality and system engineering standards (civil and military) aimed at providing assurance of QIU. In practical terms, this approach is the way ahead (because it exists). Pre-Contract Award Capability Evaluation is likely to be a useful tool in helping to build and operate systems with MHC.

The Bad News

"The reason most people do not recognize an opportunity when they meet it is because it usually goes around wearing overalls and looking like Hard Work." Henry Dodd
Reliance on coming up with a good definition of MHC won't work for the folk at the sharp end of killer robot operation. The test of whether good intentions have translated into good deeds will come after things have gone wrong. There is a need to improve military accident investigation (with some notable exceptions). Unless there is good Dekker-compatible practice for accident investigation of smart systems and weapons, more good folk who put their lives on the line for their country are going to be used as fall guys. Mock trials with realistic case material would be a good start - overdue, really. Sensible investigation of the 'system of systems' is bound to find shortfalls in numerous aspects of both human and technical design and operation. Looking for clear human/machine responsibilities at the sharp end is no more than scapegoating.

"It’s generally hopeless trying to clearly distinguish between automatic, automated and autonomous systems. We use those words to refer to different points along a spectrum of complexity and sophistication of systems. They mean slightly different things, but there aren’t clear dividing lines between them. One person’s “automated” system is another person’s “autonomous” system. I think it is more fruitful to think about which functions are automated/autonomous.” Paul Scharre. The critical parameter for automatic/autonomous is 'context coverage', which considers QIU both in specified contexts of use and in contexts beyond those initially explicitly identified. For autonomous vehicles, it is becoming recognised that the issue is not 'when' but 'where'. A similar situation will continue to apply to smart weapons. The safe and legal operation of smart weapons will remain context-dependent.

'Ordinary' automation is usually done badly, and has not learned the Human Factors lessons proffered since the mid-1960s. There are many unhelpful myths that continue to bring more bad automation into operation, e.g. 'allocation of function', 'human error', and 'cognitive bias'. Really, MHC of ordinary automation is far from common.

HSI is practiced to a much more limited degree than it should be, so the pool of expertise is smaller than would be needed. The organisational capability to deliver or operate usable systems is very variable in both industrial and military organisations. Any sizeable switch to 'Centaur' Human-Autonomous Teamwork will hit cultural, organisational, and personnel obstacles on a grand scale.
The current killer robot exceptionalism will be unhelpful if it proves to be a deterrent to applying HSI, or if it continues to be a distraction from the wider problems of remote warfare now that we have said Goodbye Uncanny Valley.

Back in the days of rule-based Knowledge Based Systems, the craft of the Knowledge Engineer involved spending 10% of the time devising an appropriate knowledge representation and 90% of the time trying to convince engineers that the human decision-making approach was not flawed but contained subtleties that allowed adaptation to context, and that the proposed machine reasoning was seriously flawed. With the current fashion for GPU-powered Machine Learning (ML), this may not be possible. Further, XAI (explainable AI) is a long way from being a proven remedy for the opaque nature of ML. ML can be brittle and fail in unexpected ways; the claim that the X part of the system will be able to generate an explanation under those circumstances is an extraordinary claim without extraordinary evidence.

Friday, 13 October 2017

Walkable urbanism vs. the Robocar

“A developed country is not a place where the poor have cars. It's where the rich use public transportation.” - Gustavo Petro
"Planning for the automobile city focuses on saving time. Planning for the accessible city focuses on time well spent." - Robert Cervero
‏ "In the walkable city, people gather in a piazza, plaza, or square. In the automobile city, they're called...intersections." - Taras Grescoe
Motocracy (noun, plural -cies): “Government by the motorists; a form of self-governance in which authority/powers of agency is vested in individual motorists and exercised directly by them or by their co-drivers/riders in order to uphold law and liberty on the road.”
"This bill is one of the biggest assaults on 1966 federal safety act that’s ever occurred."- Former NHTSA chief @JoanClaybrook on the AV bill.


For a technology without an obvious customer or regulator pull, robocars are seen as big business. Because of the lack of pull, this is not a sure thing, and indeed we may be seriously past 'peak car'. The temptation to Volkswagenize (cheat) may be irresistible to the motor industry / SV combo driving the robocar narrative. The cheat will be to control the environment to make it easier for robocars to operate. The controls on streets and pavements will make towns and cities much less friendly to humans. The controls will be sold as a 'moral imperative' to reduce deaths. Such claims lack any convincing evidence.

Robocars as autogamous technology

 "Autogamous technology; self-pollinating and self-fertilizing, responding more and more to an inner logic of development than the needs and desires of the user community". Gene I Rochlin
Robocars are mostly about money, not technology; keep the share price up in the face of Google and Tesla. "If the driverless economy is imminent, and the endgame is fleets of fully utilized robot vehicles that create radical reductions in personal vehicle ownership, why would a car company be complicit in undermining its own market? The answer is that it wouldn’t. No car company actually expects the futuristic, crash-free utopia of streets packed with Level 5 driverless vehicles to trans­pire anytime soon, nor for decades. But they do want to be taken seriously by Wall Street as well as stir up the imaginations of a public increasingly disinterested in driving. And in the meantime, they hope to sell lots of vehicles with the latest sophisticated driver-assistance technology."
Pew research has shown the lack of customer pull for robocars: "In the case of driverless vehicles, 75% of the public anticipates that this development will help the elderly and disabled live more independent lives. But a slightly larger share (81%) expects that many people who drive for a living will suffer job losses as a result. And although a plurality (39%) expects that the number of people killed or injured in traffic accidents will decrease if driverless vehicles become widespread, another 30% thinks that autonomous vehicles will make the roads less safe for humans... Nearly six-in-ten Americans say they would not want to ride in a driverless vehicle."
MIT research has found that people don't really want robocars: "The 2017 data suggest a proportional shift away from comfort with full automation. Across all age ranges, a lower proportion of respondents were interested in full automation when compared to 2016. This trend was particularly notable for younger adults aged 16-44. A higher proportion of respondents indicated comfort with systems that actively help the driver, without requiring the driver to relinquish control."
Follow the money
Big motor has its eyes on some high-value income: "The worldwide auto industry took in $2.3 trillion in revenue in 2016, but revenues associated with mobility services—a term that covers everything from Uber to traditional taxis and buses—totaled $5.4 trillion."

GM has said the autonomous vehicle and mobility business could be a potential $7 trillion global market.
The "Passenger Economy" is likewise reported to be a $7 trillion market  " A recent study conducted by Strategy Analytics for Intel estimates that the "Passenger Economy" created by the advent of autonomous vehicles will swell from $800 billion in 2035 to a whopping to $7 trillion by 2050, driven by services such as robo-taxis, automated delivery of everything from pizzas to prescription drugs, and captive marketing to idle car occupants."
There is a 'billion dollar war on maps' where the emphasis on robocars may be to our collective detriment.

Options for the way ahead

Consider two competing narratives for the future of urban mobility.
1. Networked urbanism (see), where cities are driven by big data analytics and networks controlled in part by machines. The 'smart city' as technological solutionism, with everything connected, automated, and lots of big data. You might expect the car makers to be happy with this as a future, but the bad news is that, even here, car ownership and use may fall. Ouellette on the reinvention of urban space: "If, for example, your existing urban space reality is Rob Fordian—one where cars rule while pedestrians and cyclists serve—then that model is about to be turned on its head. Car culture as the macro force of cities is on the way out. Waiting in the wings are an ever-increasing number of smart, digital technologies working synergistically to make the auto-centric urban model obsolete." Networked urbanism can be dressed up as faster, smarter, greener, but it is still pushing the corporate panopticon into our streets and lives. Big business likes AVs but needs to make long-busted claims about V2V to assemble a case.
The life in such a world sounds like that of the 'insiders' in A Very Private Life by Michael Frayn. A life tended by the kindness of corporate automata.
2. On the smart citizen side of the street, there is walkable urbanism - the "Life-Sized City". This is gaining in popularity around the world. Paris, for example, has its "journée sans voiture": "The car-free day fits within a comprehensive strategy to improve mobility while reducing motorized traffic. Hidalgo and her predecessor, Bertrand Delanoe, have enacted bold policies to prioritize transit, bicycling, and walking on city streets, resulting in a 30 percent drop in traffic over 10 years." Change is coming to the streets of Motown - alternatives to cars are going to be right in the face of the good ol' boys, and Copenhagenize Design Co has designed the bike infra network for the City of Detroit. Cities are starting to end the dominance of the traditional car, and word of the success of Copenhagen and the Netherlands is spreading. Resources for walkable urbanism are being supplemented by resources for cyclable urbanism, e.g. Velotopia. The real disruption is the bicycle, not the robocar.
The benefits of urban bike infrastructure are being recognised for business here, for traffic flow here, and for health here and here. A summary of ten reasons for reducing car dependency is here. Progress down this route is non-linear: "Getting from 0 to 5% bike mode share is really hard. Getting from 5 to 15% is a piece of cake." - @copenhagenize. "There are 3 million pedelec bikes in use in Germany. 3.7% of population. Adoption about to enter hockey stick...3.3 million Ebike units will sell in 2023, Europe. (2 million in 2016, 100k in 2006)." Horace Dediu @asymco.
So far, progress has been largely out of the public eye; reaching 2 million EVs was met with huge publicity, but 200 million e-bikes in China alone are invisible. In India, "Today, India has over 25 million four-wheeled cars, jeeps and trucks registered to private owners, escalating by about 2 million new vehicles every year. The same data registry of 2013 by the Ministry of Road Transport also recorded more than 130 million two-wheelers plying on Indian roads. A staggering number by any measure, and greater than the number of four-wheelers by a factor of five."
Dockless bike hire has real potential, and big money behind it. "For all the talk of autonomous cars transforming cities it's entirely possible that another high-tech form of transport – free-floating rental bicycles – could get there first. ... In China, a dockless bike-share boom is reducing car use in cities and even leading to forecasts that less fossil fuel will be burned in the future."
Walkable, cyclable urbanism might look unstoppable, but its threat to the motor industry and the big data corporates is likely to bring a response.

People are messy, and difficult for robocars to deal with

"The randomness of the environment such as children or wildlife cannot be dealt with by today’s technology" - Markus Rothoff, Director of Autonomous Driving, Volvo
Apart from Volvo's trouble with kangaroos, there are many aspects of robocar / people interaction that are difficult; see here. Robocars need to interact with, for example, pedestrians. This is difficult, expensive, and culturally alien to the nerds building the cars (Cefkin at Nissan is a rare anthropologist in the business). In robocarland, nobody can hear you scream - It's No Use Honking. The Robot at the Wheel Can't Hear You: "If the cars drive in a way that's really distinct from the way that every other motorist on the road is driving, there will be in the worst case accidents and in the best case frustration," he said. "What that's going to lead to is a lower likelihood that the public is going to accept the technology."
The cheat is: Just get rid of the people around cars, so you don't need to solve these problems. 

The cheat is coming - they are after our infrastructure

"If you doubt self-driving cars are coming, you haven’t paid attention to the rate of human ingenuity and technological progress. Conversely, if you believe more than 1% of the statements coming out of Detroit, Germany, Japan and Silicon Valley about when they’re getting here, you’re as deluded as their investors. The question isn’t when, it’s how and where." Alex Roy
"There are fourteen major car companies in the world. No one believes they can all survive, and Morgan Stanley believes only five or six will. The big ones are hedged against any delay in the adoption of self-driving cars." Alex Roy

An example of the 'moral imperative' being used to destroy walkable urbanism (and much more) is here. The slippery slope starts with 'modest changes' of course. "In summary, safe autonomous cars will require modest infrastructure changes, designs that make them easily recognized and predictable, and that pedestrians and human drivers understand how computer driven cars behave." All for benefits that are vapourware.
There are reports of dedicated infrastructure already. "In the new report, the group says this transformation will occur in three stages. First, AVs will be allowed to share HOV lanes. The study’s authors say that this phase could be implemented today and note that California law already allows self-driving cars to use carpool lanes. Step two would involve creating a lane dedicated to AVs. Step three: converting all I-5 lanes to be used exclusively by self-driving cars."
The vision of a people-free, dedicated robocar environment is being set out: "For example, when all riders are focused inward and the driving is handled by a sensor network, indicators like road signs, brake lights, and lane separators become unnecessary. If there are no human drivers, we won't have a need for these visual guides... With awareness of approaching vehicles and traffic, intersection traffic lights become less necessary. Night sensor driving reduces the need for streetlights on highways. Road signs and lanes disappear, with roadway intelligence built into vehicles. Highway lanes expand and contract automatically for high-traffic times. Autonomous-only highways allow for much higher rates of speed."
Completely unfounded expectations of AV performance and safety are being used to influence infrastructure. For example: "Currently the average safe driver leaves 'one car length per 10 miles per hour' between vehicles (at least they have been taught to do so) but the automated (and autonomous) systems can react much faster than humans and can therefore safely travel much closer together. As a larger and larger percentage of the vehicle fleet becomes capable of safe travel in less space, the real capacity of the roadway increases. As the demand for highway infrastructure is predicated on the safe traveling distance under human control and traffic and revenue predictions are based on these assumptions, highway capacity manual assumptions will be increasingly inadequate as autonomous features are introduced." This article combines unfounded claims with moral blackmail: "Every day that goes by without driverless cars, people die. The truth is that humans are bad drivers, and driverless cars are safer. To ensure that we reach mass adoption as soon as possible, we need to sort out these issues of trust." Since we don't have driverless cars yet (or even safety requirements for them), this claim is unfounded and is being used to sell big business and technology. Also, he hasn't got the Alex Roy message on 'trolley problem' nonsense, saying "In other words, manufacturers must choose whether to make morally utilitarian cars, or preferentially self-protective ones."

The pavements / sidewalks will not be free, either.

Do watch this video testimony about a delivery robot on a railway platform.
This Guardian article is good on delivery robots:“If there really were hundreds of little robots,” Ehrenfeucht said, “they would stop functioning as sidewalks and start functioning more as bike lanes. They would stop being spaces that are available for playing games or sitting down.” Ehrenfeucht pointed out that 130 years ago, streets were not yet divided into lanes for traffic, parked cars, pedestrians and bikes, and that the introduction of robots to the streetscape might require a reimagining of the available space, possibly with a designated lane for robots....Sidewalks are often a hotly disputed space, and conflicts are bound to arise as new uses are proposed. Many cities across the US have adopted sit/lie ordinances, which criminalize resting or sleeping on the sidewalk and are generally considered to be targeted specifically at homeless people. At the same time, urbanists have tried to promote new uses of sidewalk space with features like “parklets”. ..“We really see this as a privatization of the public right of way,” said Nicole Ferrara, executive director of pedestrian advocacy group Walk San Francisco, who wants to ban robots from the sidewalk. Ferrara argued that walking has social, health and economic benefits, while robots could pose a hazard to senior citizens and people with disabilities....“We’re not excited about the idea of engineering walking out of our lives,” she said. “People live in urban centers not because they want to sit at home in their house and have their toothbrush delivered to their door, but because they have a pharmacy around the corner that they can walk to.”
A welcome (and rare) sight was an article pointing out the risks of robocars: "...there has been very little public discussion of whether self-driving vehicles will coexist or collide with long-standing principles of accountability, transparency, and consumer protection that collectively constitute the Personal Responsibility System."
Further reading on the moves underway to rid the streets of people is here and here. A tinfoil hat may be required, but the arguments are highly plausible.


Robocars are part of the tech utopia nobody wants, but there is money and momentum behind them. The solutionism is at work, co-opting good causes to make robocars seem critical to people's lives.
If we want walkable urbanism (and we should), we will have to make a stand.

Monday, 18 September 2017

Safety requirements for Autonomous Vehicles

The voices advocating a transition to Self-Driving Vehicles / Autonomous Vehicles / robocars claim they will eliminate '1 million deaths per year'. I have been told there is a 'moral imperative' to use AI for driving because of this. However, as pointed out by @SafeSelfDrive on Twitter, robocars are not a response to user pull or a safety initiative. Robocars started at Google, and the motivation for the initiative is somewhat unclear in this interview with Chris Urmson. I am reliably informed that everyone else is just reacting to Google. All in all, there was not an obvious case for this massive investment, despite the crowd now shouting about ending 1 million deaths.

Nick Reed of TRL has an interesting piece on robocar safety, pointing out the difficulties of proof by testing. (Of course, testing is only part of a safety-critical system life cycle.) He tells us that in the UK there are 180 million miles between fatal accidents. Vehicles in the UK do about 324 billion miles a year (see here). People say they are unhappy with the current driving death toll, so what would be a better number? The EU has a strategic target of halving road fatalities, so let's use that, i.e. a fatality every 360 million miles. People tolerate voluntary risk (driving) at roughly 1,000 times the level they accept for involuntary risk (being transported), so the target for robocars is a fatality every 360 billion miles, i.e. a bit less than one a year in the UK. My uninformed guess is that this is the right order of magnitude.
A comparison with rail might help. People now travel about 40 billion miles by rail in the UK (a big increase over recent years). There has been 1 passenger fatality since 2006. Some crude arithmetic: 1 fatality per 10 years at 40 billion miles p.a. gives us a fatality every 400 billion miles, which isn't far off the robocar target.
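The arithmetic above is easy to check; here it is as a few lines of Python, using only the figures quoted in the post:

```python
# A quick check of the arithmetic above, using the figures quoted in the post.
uk_miles_per_year = 324e9          # vehicle miles travelled per year in the UK
miles_per_fatality_now = 180e6     # roughly one fatal accident per 180 million miles

halved = 2 * miles_per_fatality_now                   # EU-style target: 360 million miles per fatality
voluntary_vs_involuntary = 1000                       # tolerance factor for imposed rather than chosen risk
robocar_target = halved * voluntary_vs_involuntary    # 360 billion miles per fatality

print(uk_miles_per_year / robocar_target)   # ~0.9 fatalities per year across the UK

# Rail comparison: ~1 passenger fatality in ~10 years at ~40 billion miles per year
print(10 * 40e9)                            # 400 billion miles per fatality - the same order of magnitude
```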

In 2014, there were 315 fatalities on the rail network, 89% of which were suicides. It is important that the boundaries for robocar fatalities are set and monitored appropriately. John Adams has pointed out that, while car occupant fatalities have decreased, pedestrian and cyclist fatalities have increased.

Chris Urmson has this to say about safety criteria:
"But when we think about the rate at which bad things happen, they’re very low. So you know in America, somebody dies in a car accident about 1.15 times per 100 million miles. That’s like 10,000 years of an average person’s driving. So, let’s say the technology is pretty good but not that good. You know, someone dies once every 50 million miles. We’re going to have twice as many accidents and fatalities on the roads on average, but for any one individual they could go a lifetime, many lifetimes before they ever see that. So that experience with the technology and kind of becoming falsely comfortable with the safety of it is one of the challenges they face."
Talking about doubling the accident rate is rather different to the breathless hype from the million deaths a year crowd.
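As a rough sanity check of Urmson's figures (the annual mileage per driver below is my assumption, not a number from the interview):

```python
# A rough sanity check of Urmson's figures; annual mileage per driver is assumed.
miles_per_fatality = 100e6 / 1.15      # ~87 million miles between fatalities today
annual_miles_per_driver = 12_000       # assumption: a typical US annual mileage

print(miles_per_fatality / annual_miles_per_driver)   # ~7,000 years - his "10,000 years" order of magnitude
print(miles_per_fatality / 50e6)                      # ~1.7 - "pretty good but not that good" is roughly twice today's rate
```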
In a dazzling piece about driving in India, Alex Roy says:
"Because in the absence of a technical or regulatory definition of “safety”, manufacturers—who have invested billions in self-driving—will be forced to decide what level of self-driving is safe enough to bring to market, and market it.
The mobility industry and clickbait media supporting it are almost totally invested in the concept of the Zero Day, the day when self-driving cars reach a mystical tipping point and “take over the world,” which I also refer to as the Autonomotive Singularity. The truth is that their utopian, winner-takes-all narrative is no more than a velveteen vision of good intentions guided (and blinded) by ham-fisted profit."
The idea of manufacturers setting their own safety criteria based on marketing does not appeal to me one bit.

The right approach in the UK is, of course, an ALARP safety case with a good understanding of 'grossly disproportionate costs', supported by use of appropriate standards. A decent profile audit against Automotive SPiCE would help.

Tuesday, 6 June 2017

Urban mobility - harmonising platforms and infrastructure

“Y’know, watching government regulators trying to keep up with the world is my favorite sport.”
Neal Stephenson, Snow Crash, 1992.
Technology metals and new materials offer the promise of a 'Cambrian explosion' in forms of urban mobility. There are too many examples to list, but see the options at the end of this, or this. If we can co-develop infrastructure and mobility platforms in a functional way, then we may achieve remarkable levels of Quality In Use [1]. I have been unable to find any signs of work towards this aim, so this post has been written in haste as a call for someone to point me in the right direction. It must be happening, surely?
Decent bicycle infrastructure was achieved in Denmark and the Netherlands only after a struggle; it has still to happen in the UK, by and large. Innovative approaches to bicycle infrastructure seem the right place to start, e.g. by expanding this, this and this.
The UK history of regulating innovative platforms is pretty patchy, e.g. this or this on Segways and hoverboards, and this or this on microcars. E-bikes and pedelecs already seem a bit of a regulatory mess, e.g. see this, this, this or this. Note also that speed is an important determinant of Quality In Use, and current standards may not be right, as discussed by Copenhagenize here.
For Système Panhard vehicles and their 20th Century derivatives, the Silicon Valley obsession with technology may not be a cost-effective approach (there are folk who claim a moral imperative to use AI to reduce accident rates - such folk are dangerous). Simple speed limiters might be better (though less popular).
Much of the current regulation seems arbitrary, appears to be based on (unstated) assumptions that are (or will become) very questionable, and uses its own specialist language (invalid carriages, pedelecs etc.). It doesn't seem exoskeleton-ready. Modern platforms offer the potential to meet multiple regulatory categories at the press of a button, or automatically. New types of platform need appropriate places in the infrastructure. At the time of writing, San Francisco is considering a ban on delivery robots on the pavement (sidewalk there). Functional regulation is required to spare us from inappropriate regulations such as the urban myth of London taxis needing a bale of hay in the boot for the horse. This project between MIT, the National University of Singapore, and the Singapore-MIT Alliance for Research and Technology (SMART) is worth a look. They converted a mobility scooter to operate autonomously. In two months.


Walkable urbanism


  • Accept that an integrated approach to platforms and infrastructure is the best route to safety, low cost access, and innovation. There is currently some very limited acceptance of multi-mode platforms for both invalid carriages and pedelecs/e-bikes, but way short of what is desirable. The potential for innovation may be best implemented with a major extension of multi-mode platforms operating according to the lane they are in.
  • Human Centred Design [2], prototyping inc. VR, AR, and consultation (Holmston Rd, Ayr I'm looking at you). Accept that designing for difficult use cases (disability, elderly etc.) benefits everyone else, and benefits difficult use cases by reducing costs. This code of practice may help (I have not examined it yet).
  • Regulatory capture [3] (e.g. Uber in London) to be treated with extreme prejudice. The public highway is to remain a public commons, and not to be privatised. Data and algorithms relating to the safe use of roads and pavements ditto.
  • Functional allocation of streets, and speed related lanes should lead to new opportunities for platforms with light regulation of design and operation for low speed platforms.
  • Accept that the US is an outlier in terms of urban design and public transport provision, and 'solutions' from the US should be treated with considerable caution, including their fascination with putting electric propulsion and advanced computing in 20th Century cars.
The approach to design implementation would seem to start with functional street categories, giving combinations of lanes. The potential use of new platforms needs to be aligned with an evolution of the types of lane. A suggested arrangement is as follows (sketched as a data structure after the lane descriptions):

Lane 1 - Pavement updated (UK pavement = US sidewalk); design speed 4 mph

Pedestrians, unpowered prams, buggies, trollies, carts, wheelchairs etc. Platforms up to two feet wide (legged, wheeled, hover - whatever) with a mode that limits speed (no licence, lights, or horn required, but enough visibility and an audible warning of approach; minimal regulation and liability), including autonomous platforms with or without people. Platforms without people need to behave appropriately, e.g. around blind pedestrians. Platforms that can also operate in other lanes are fine here when in their Lane 1 mode.

Lane 2 - Cycle lane updated; design speed 10 mph

This is an average urban cycling speed and doesn't surprise other people. Having lanes 1 and 2 next to each other with just visible distinction seems to offer the most flexibility. Platforms with and without human assistance, with and without people. Platforms with no people will need some sort of official safety approval.

Lane 3 - Urban street updated; design speed 20 mph

If the functional design of the street has this as the upper speed limit, then lanes 1, 2, and 3 can be combined with no separation (cf. this), but platforms that travel at speeds higher than the Lane 2 speed will need proper Type Approval, licensing etc. Lane 3-only platforms can be two people wide.

Lane 4 and above

Platforms capable of appropriate minimum speed, with suitable visibility, audible warning, protection.
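To make the scheme above concrete, here is a hypothetical sketch of it as data. The names, fields, and values simply restate the lane descriptions (the minimum speed for Lane 4 and above is my own placeholder figure) and are not any official classification.

```python
# A hypothetical encoding of the suggested lane scheme; it restates the text
# above and is not an official classification.
LANES = {
    "Lane 1 (pavement / sidewalk)": {"design_speed_mph": 4, "min_speed_mph": 0,
                                     "notes": "minimal regulation; visible and audible warning of approach"},
    "Lane 2 (cycle lane)":          {"design_speed_mph": 10, "min_speed_mph": 0,
                                     "notes": "safety approval needed for platforms with no people"},
    "Lane 3 (urban street)":        {"design_speed_mph": 20, "min_speed_mph": 0,
                                     "notes": "Type Approval and licensing above Lane 2 speed"},
    "Lane 4 and above":             {"design_speed_mph": None, "min_speed_mph": 30,  # placeholder minimum
                                     "notes": "appropriate minimum speed, visibility, audible warning, protection"},
}

def lanes_for_platform(top_speed_mph, capped_mode_speeds_mph):
    """Lanes a multi-mode platform may use: it needs a mode capped at or below
    the lane's design speed, and a top speed meeting the lane's minimum."""
    ok = []
    for name, spec in LANES.items():
        fits_cap = spec["design_speed_mph"] is None or any(
            s <= spec["design_speed_mph"] for s in capped_mode_speeds_mph)
        fits_min = top_speed_mph >= spec["min_speed_mph"]
        if fits_cap and fits_min:
            ok.append(name)
    return ok

# e.g. a pedelec with a 4 mph "pavement mode" and a 15 mph top speed
print(lanes_for_platform(15, [4, 15]))   # lanes 1-3, but not "Lane 4 and above"
```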

Question - Folk must be working on harmonising platforms and infrastructure. Who is?


[1] Quality In Use is defined as: "The degree to which a product or system can be used by specific users to meet their needs to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use." ISO 25010 (2011)
[2] Principles of Human-Centred Design:
  • A clear and explicit understanding of users, tasks and environments
  • The involvement of users throughout design and development
  • Iteration
  • Designing for the user experience
  • User centred evaluation
  • Multi- disciplinary skills and perspectives
[3]  "When you try to regulate markets the first thing to get bought and sold are the regulators"  P.J. O'Rourke

Friday, 2 June 2017

Some reflections on 'Sully'

The movie 'Sully' received some criticism for taking artistic licence with the NTSB inquiry, and the NTSB complained about how they were portrayed. Additionally, some of the cockpit actions were criticised as incorrect - not according to procedure. This note rebuts such complaints and criticisms.
When I hear 'failed to follow procedures' this is the picture that comes to mind.

It is as important to examine the appropriateness of the procedures as it is to examine the appropriateness of the crew's behaviour.
There were no procedures for the situation they faced (see the Airbus report here), so criticism of the flap selection seems like no more than niggardly hindsight. This article has the following:
"The NTSB recommended changing the location of the rafts to ensure capacity for all passengers, since it's unlikely the rear rafts would be available. The FAA rejected that, saying that if Sullenberger had followed Airbus' directions on descent speeds for ditching, the rear rafts would have been usable. The NTSB said the ability of pilots to achieve those descent speeds has never been tested and can't be relied on. "
There are also questions as to the extent the investigation recommendations have been acted on.
As regards the NTSB moaning about being seen as adversarial, this from the scriptwriter has the ring of truth to it.
"The key was, I had to do three layers of research," he says. "One was everything about the NTSB investigation, two was Sully's book...but then really the third level was memorizing Sully and Sully's willingness to share the stuff that he had not shared before - what he went through that was behind the scenes, that was the wrenching and crushing investigation, the attempt, not out of ill will, but the honest attempt to try and find something that would affix blame. That's really what they were looking for. You know, you look at 99 percent of these cases, the investigation, it always says at the end, 'pilot error.' That's the expectation even if someone is not going to speak that - that's somewhere in the bloodstream of the investigation - pilot error. There was no pilot error to find. But it didn't keep them from looking."
Recall the press release for the incident report on Flight 447; it put 'human error' on the front pages of newspapers round the world (or at best 'pilot and technical error'). If you read this compelling analysis of the incident, a different picture emerges:
  • Two co-pilots flying rather than pilot/co-pilot, with #3 pilot as Flying Pilot.
  • The Air Data System froze (a known problem). Type Approval for Air Data Systems had not changed since the days of propeller aircraft flying at half the height and half the speed. This caused the Flight Computer to go into some sort of emergency mode.
  • None of this had been in the training and simulation for the pilots.
  • The Flying Pilot held the joystick right back; the other pilot would not have been aware of this, since the joysticks weren't coupled.
  • The hindsight interpretation of the stall warning appears to be controversial. It would appear that the manufacturer was keen to state that the situation facing the pilots was straightforward (i.e. human error): "The situation was not ambiguous and the stall was obvious." The BEA investigators did not think matters were so straightforward; see here and here. Not surprisingly, there were 66 pages of discussion at PPRuNe.
A remarkable incident of pilots vs. automation, where the pilots survived to tell their side of the story, can be found here.
To quote Sidney Dekker: 'Human error is a symptom of trouble deeper inside a system' - a consequence, not a cause, of accidents.