
The Tesla Crash

It’s happened. A person has died in an accident involving a driverless car, raising difficult questions about what it means to regulate autonomous vehicles, and to negotiate control and responsibility with software that may one day be very good, but currently is not.

In tracing an ethnography of error in driverless cars, I’m particularly interested in how error happens, is recorded, understood, regulated and then used as feedback in further development and learning. So the news of any and every crash or accident becomes a valuable moment to document.

What we know from various news reports is that 40-year-old Joshua Brown was, supposedly, watching a Harry Potter DVD while test-driving his Tesla with the autopilot mode enabled, when the car slammed into the underside of a very large trailer truck. Apparently the sensors on the car could not distinguish between the bright sky and the white of the trailer truck. The top of the car was sheared off as it passed at speed under the carriage of the trailer, and debris was scattered widely.

Here’s an excerpt from a Reuters report of the crash, from the perspective of a family on whose property parts of the car landed:

“Van Kavelaar said the car that came to rest in his yard next to a sycamore tree looked like a metal sardine can whose lid had been rolled back with a key. After the collision, he said, the car ran off the road, broke through a wire fence guarding a county pond and then through another fence onto Van Kavelaar’s land, threaded itself between two trees, hit and broke a wooden utility pole, crossed his driveway and stopped in his large front yard where his three daughters used to practice softball. They were at a game that day and now won’t go in the yard. His wife, Chrissy VanKavelaar, said they continue to find parts of the car in their yard eight weeks after the crash. “Every time it rains or we mow we find another piece of that car,” she said.”

People in the vicinity of a crash get drawn in without seeming to have a choice in the matter. Their perspective provides all kinds of interesting details and parallel narratives.

Joshua Brown was a Tesla enthusiast and had signed up to be a test driver. This meant he knew he was testing software; it wasn’t ready for the market yet. From Tesla’s perspective, what seems to count is how many millions of miles their cars logged before an accident occurred, which may not have been the best way to lead with a report on Brown’s death.

For engineers, the key issue is perhaps the functioning of the sensors that could not distinguish between a bright sky and a bright, white trailer. Possibly, the code analysing the sensor data hasn’t been trained well enough to make that distinction. Interestingly, this is the sort of error a human being wouldn’t make; just as we know that humans can distinguish between a Labrador and a Dalmatian while computer programs are only just learning how to. Clearly, miles to go…
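
To make this concrete, here is a purely illustrative toy sketch in Python; it is not Tesla’s pipeline, and every number in it is invented. A classifier that has only ever learned “bright means open sky, dark means obstacle” has no feature with which to tell a sun-lit white trailer apart from the sky behind it.

```python
# Toy sketch only: NOT Tesla's actual system. All data is invented.
import numpy as np

rng = np.random.default_rng(0)

# Fake training data: one feature, mean image brightness in [0, 1].
# Label 1 = open sky ahead (safe), label 0 = obstacle ahead.
sky = rng.uniform(0.8, 1.0, 500)        # bright, empty skies
obstacle = rng.uniform(0.1, 0.6, 500)   # darker cars, trees, walls
X = np.concatenate([sky, obstacle])
y = np.concatenate([np.ones(500), np.zeros(500)])

# "Training": pick the brightness threshold that best separates classes.
thresholds = np.linspace(0.0, 1.0, 101)
accuracy = [((X > t) == y).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accuracy))]

# A white trailer side in bright sunlight is as bright as sky, so the
# learned rule confidently calls it "open sky".
white_trailer_brightness = 0.9
print("trailer read as open sky?", white_trailer_brightness > best_t)
```

The point of the toy is only that a model can be perfectly accurate on the cases it was trained on and still fail on an input the training data never anticipated.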

A key detail in this case concerns the nature of autopilot and what it means to engage this mode. Tesla clearly states that in its autopilot mode the driver is still in control of, and responsible for, the vehicle:

“[Autopilot] is an assist feature that requires you to keep your hands on the steering wheel at all times … you need to maintain control and responsibility for your vehicle while using it.” Additionally, every time autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.”

What Tesla is saying is that they’re not ready to hand over any responsibility or autonomy to machines. The human is still very much ‘in the loop’ with autopilot.

I suspect the law will need to wrangle over what autopilot means in the context of road transport and cars, as opposed to autopilot in aviation; this history has been traced by Madeleine Elish and Tim Hwang. They write that they “observe a counterintuitive focus on human responsibility even while human action is increasingly replaced by automation.”

There is a historical tendency to ‘praise the machine and punish the human’ for accidents and errors. Their recommendation is to reframe the question of accountability to widen the web of actors and their agencies beyond just the vehicle and driver. They “propose that the debate around liability and autonomous systems be reframed more precisely to reflect the agentive role of designers and engineers and the new and unique kinds of human action attendant to autonomous systems. The advent of commercially available autonomous vehicles, like the driverless car, presents an opportunity to reconfigure regimes of liability that reflect realities of informational asymmetry between designers and consumers.”

This is so important, and yet I find it difficult to see, even as a speculative exercise, how you’d get Elon Musk to acknowledge his own role, or that of his engineers and developers, in accountability mechanisms. It will be interesting to watch how this plays out in the American legal system, because eventually there are going to have to be laws that acknowledge shared responsibility between humans and machines, just as robots need to be regulated differently from humans.

The Problem with Trolleys at re:publica

I gave my first talk about ethics and driverless cars for a non-specialist audience at re:publica 2016. In it I look at the problem with the Trolley Problem, the thought experiment being used to train machine learning algorithms in driverless cars. I focus on how logic-based notions of ethics have been transformed into an engineering problem, and suggest that this ethics-as-engineering approach is what will allow American law and insurance companies to assign blame and responsibility in the inevitable case of accidents. There is also the tension that machines are assumed to be correct, except when they aren’t, which sits within a difficult history of ‘praising machines’ and ‘punishing humans’ for accidents and errors. I end by talking about questions of accountability that look beyond algorithms and software themselves to their sites of production.

Here’s the full talk.

Works cited in this talk:

1. Judith Jarvis Thomson’s 1985 paper in the Yale Law Journal, ‘The Trolley Problem’
2. Patrick Lin’s work on ethics and driverless cars
3. Madeleine Elish and Tim Hwang’s paper ‘Praise the Machine! Punish the Human!’, from the Intelligence & Autonomy group at Data & Society
4. Madeleine Elish’s paper on ‘moral crumple zones’; there’s a good talk and discussion with her on the website of the proceedings of the WeRobot 2016 event at Miami Law School.
5. Langdon Winner’s ‘Do Artifacts Have Politics?’
6. Bruno Latour’s Actor Network Theory.

Experience E

How science represents the real world can be cute to the point of being frustrating. In 7th grade mathematics you have problems like:

“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”

Well, of course there was no such thing then as men taking paternity leave. But you can’t help but think about the universe of such a problem. What was the work? Were all the men the same, did they do the work in the same way, wasn’t one of them better than the rest, and therefore the leader of the pack who got to decide what they would do on their day off?

Here is the definition of machine learning according to one of the pioneers of machine learning, Tom M. Mitchell[1]:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”

This can be difficult for a social scientist to parse, because you’re curious about the experience E and the experiencer of experience E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who set the standard of performance P? For a scientist, experience E itself is not that important; what matters is how E is achieved, sustained and improved upon. How science develops these problem-stories becomes an indicator of how it narrativises the world: a world that needs to be fixed.
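
Since the definition is doing a lot of work here, a minimal sketch may help, with everything in it invented for illustration: the task T is telling even numbers from odd, the performance measure P is accuracy on a fixed test set, and the experience E is a growing log of labelled examples.

```python
# Minimal sketch of Mitchell's E/T/P definition; all data is invented.
import random

random.seed(1)

# Task T: decide whether a number is even.
# Performance P: accuracy on a fixed, held-out test set.
test = [(n, n % 2 == 0) for n in range(100, 200)]

def performance(experience):
    # "Learning": memorise the majority label for each last digit seen.
    table = {}
    for n, label in experience:
        table.setdefault(n % 10, []).append(label)
    def predict(n):
        votes = table.get(n % 10)
        if not votes:
            return random.random() < 0.5  # guess on unseen digits
        return sum(votes) > len(votes) / 2
    return sum(predict(n) == label for n, label in test) / len(test)

# Experience E: more labelled examples. P improves as E grows,
# which is all the definition asks for.
for size in (2, 10, 50, 200):
    experience = []
    for _ in range(size):
        n = random.randrange(100)
        experience.append((n, n % 2 == 0))
    print(f"E = {size:3d} examples -> P = {performance(experience):.2f}")
```

Notice how little the sketch has to say about what E *is* for the program: E is just rows in a log, which is precisely the flattening a social scientist might want to interrogate.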

This definition is the beginning of the framing of ethics in an autonomous vehicle. Ethics becomes an engineering problem, to be solved by logical probabilities executed and analysed by machine learning algorithms. (TBC)
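
A deliberately crude sketch of what that framing looks like in practice; this is my illustration, not any manufacturer’s code, and every option name and number below is an assumption:

```python
# Ethics-as-engineering, caricatured: a trolley-style dilemma reduced
# to an argmin over expected harm. All names and numbers are invented.
options = {
    "stay_in_lane": {"p_collision": 0.9, "harm_if_collision": 5},
    "swerve_left":  {"p_collision": 0.3, "harm_if_collision": 2},
    "swerve_right": {"p_collision": 0.5, "harm_if_collision": 1},
}

def expected_harm(option):
    return option["p_collision"] * option["harm_if_collision"]

decision = min(options, key=lambda name: expected_harm(options[name]))
print(decision)  # the "ethical" choice is whatever minimises a number
```

Once a choice is encoded this way, accountability looks auditable: blame becomes a matter of checking the numbers and the argmin, which is exactly what makes the framing legible to law and insurance companies.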

[1] http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/?utm_source=medium&utm_medium=post&utm_content=chapterlink&utm_campaign=republish

Hot Flash

A dwarf called Warren runs the Internet of Things facility and I am in love with him. You can never really rationally explain why you love someone, you just do. Warren is in trouble with his refrigerator. The refrigerator started messaging HOMELYNX about how the cucumber supply was going down faster than usual. For one thing, there shouldn’t even be cucumbers in the refrigerator, and while the most recent supply could be rationalised by the tubs of hummus, labneh and borani – guests – it was still going down very fast. Had anything else reported something irregular about the cucumbers? It turns out that the waste disposal unit could verify that cucumber peels had been identified, and the toilet could detect traces of them; so we know they hadn’t been thrown out of the window at an unsuspecting passerby. That would have been funny, actually, especially if there were such a thing as a window or a passerby around here. No, all you have here is the hum and rinse of electricity through your hair.

The thing is, Warren doesn’t even eat cucumbers; they were left over from the crudité plate at the farewell party for the Chief. Not wanting to waste them, and knowing I love cucumbers, Warren just put the extras in the fridge. Some things are perfectly rational and explainable, but the problem with rationality is that everyone has their own version of it.

Warren maintains a section of the main server farm, MEM046Z, where the Internet of Things is made, and he isn’t supposed to fall in love. He certainly isn’t supposed to fall in love with someone he met online who can only stand to eat cucumbers and yoghurt all summer and thinks she is a Timurid’s Wife. The Internet of Things is a high security facility and no one is allowed to enter except authorised personnel, and certainly not any Central Asian types – real or imaginary.

The irony doesn’t escape us that it all started with the very same tattling refrigerator having a Twitter exchange with @thetimuridswife. I also love melons and ice-cream and the refrigerator was telling me about the history of ice-cream making, and kulfis in particular, long before modern refrigeration.  (Kulfi has been appropriated by the Indians but it actually came from Central Asia.) If you pulled up the logs you’d see Twitter exchanges about flavours and their pairings, tweets that made sense to no one else but the two of us. It started with the refrigerator tweeting ‘beetroots & mustard’. Then, I tweeted

@thetimuridswife parmesan and chocolate

hesitantly, and waited to see what would happen. And then it came:

@coolhuntings23 blue cheese and pear

@thetimuridswife chocolate and onions

@coolhuntings23 green beans and oranges

There are no secrets with a dwarf. The dwarf had hacked into the refrigerator’s Twitter ID and was tweeting as it, without the refrigerator realising it had been compromised. It had always been him, and me; the refrigerator was just a… Trojan horse.

Over a series of Twitter exchanges I told Warren all about my travels and reincarnation. I am a Timurid’s wife and the fleshy concubine to a Sassanid warlord in ancient Samarqand, “a city so steeped in poetry that even medical doctors wrote their treatises in verse.” As a result I am something of a secret agent with very high levels of security clearance. Uzbek, in those days, far outstripped Persian as a language. Persian had one word for crying; Uzbek had over a hundred. Crying like a baby hiccuping, crying as if you have lost your keys, crying as if your parents have died, crying over beautiful poetry, crying for the way you used to love someone and don’t anymore. Samarqand was so far advanced in the sciences, art, architecture, medicine, astronomy, poetics… Warren thinks that sometimes I’m doing other people’s share of make-believe as well.

He lied about there being another person in the house eating the cucumbers. He said he had changed his diet, but it turns out the feeds from the heat sensors revealed a second person in the house. Once they all started pooling their data and looking at everything that wasn’t Warren, they found me. I couldn’t help it, I’m menopausal, and all that seems to keep me cool is a diet of cucumbers and yoghurt. (Dill and garlic in the mix never hurt.)

It wasn’t easy to hide from a house; it’s like being 12 again and all the girls are whispering about you behind your back and you absolutely know they are but can’t seem to get even the smallest piece of information from anyone about it or make them stop. It is like the time your best friend found and read your secret diary.

Warren said we should just continue as normal – quietly, him going about his work and me reading, studying and writing. In the evenings we would eat and cheat at cards and giggle over other people’s data streams. It was only a matter of time before they came for us. Till then, he told me to play with his hair and tell him about the siege of Samarqand.