Tag Archives: self driving car

“Imagining Ethics”: Testing out SF as a method

Back in early April, while I was in New York for the Theorizing the Web conference, two artists-in-residence at New Inc, Stephanie Dinkins and Francis Tseng, invited me to test something out at their monthly “AI Assembly”.

I believe it is difficult for everyday users to understand and make sense of digital technologies, and that specialists such as computer scientists or lawyers can be restricted by their disciplinary training from seeing the ways in which technology and society interact.

Yet I think we have to inspire a wider conversation about digital literacies, given the present and future of ubiquitous computing and artificial intelligence. From data breaches to the complex decision-making expected of machine learning, what are the ways in which people may conceptualise values and norms for regulating human-machine relationships in the near future?

There are a number of methods for mapping out the social-political-economic dimensions of future scenarios, and they’re commonly used across different fields in the automotive industry. (Mathematical modeling for predicting crashes has in fact been around since the 1980s.) I’ve also been thinking about using SF (speculative fiction, science fiction, speculative feminism, science fabulation, string figuring: Donna Haraway expands SF beyond ‘science fiction’) as a way of telling stories about power, society, and technology.

Inspired by these, I’m curious about the imagination, and the role that imaginaries play in shaping and articulating how people think about a near future with machines and technology. I believe that ‘socio-technical imaginaries’, a concept developed by Sheila Jasanoff and Sang-Hyun Kim to describe how collective visions underlie and shape the development of technologies in society, may be an interesting theoretical framework to adopt and adapt. I’m trying to find a way to bring these elements together, and the New Inc experiment is part of that.

Here’s more about all this on the Cyborgology blog.

Machine Research @ Transmediale

The results of the Machine Research workshop from back in October were launched at Transmediale: the zine and a studio talk.

During the workshop, we explored the use of various writing machines and the ways in which research has become machine-like. The workshop questioned how research is bound to the reputation economy and the profiteering of publishing companies, which charge large amounts of money to release texts under restrictive conditions. Using Free, Libre, and Open Source collaboration tools, Machine Research participants experimented with collective notetaking, transforming their contributions through machine-authoring scripts and a publishing tool developed by Sarah Garcin. (The image accompanying this post is a shot of the PJ, or Publication Jockey, with some text it laid out on a screen in the back.) The print publication, or ‘zine, launched at transmediale, is one result of this process. You can read the zine online.

The studio talk brought together the half of our research group that talked about ‘infrastructures’. Listen to it here (I’m speaking at 44:09):

The Tesla Crash

It’s happened. A person has died in an accident involving a driverless car, raising difficult questions about what it means to regulate autonomous vehicles, and to negotiate control and responsibility with software that may one day be very good, but currently is not.

In tracing an ethnography of error in driverless cars, I’m particularly interested in how error happens, is recorded, understood, regulated and then used as feedback in further development and learning. So the news of any and every crash or accident becomes a valuable moment to document.

What we know from various news reports is that 40-year-old Joshua Brown was, supposedly, watching a Harry Potter DVD while test-driving his Tesla with the autopilot mode enabled, when the car slammed into the underside of a very large trailer truck. Apparently the sensors on the car could not distinguish between the bright sky and the white of the trailer truck. The top of the car was sheared off as it sped under the carriage of the trailer, and debris was scattered far.

Here’s an excerpt from a Reuters report of the crash, from the perspective of a family on whose property parts of the car landed:

“Van Kavelaar said the car that came to rest in his yard next to a sycamore tree looked like a metal sardine can whose lid had been rolled back with a key. After the collision, he said, the car ran off the road, broke through a wire fence guarding a county pond and then through another fence onto Van Kavelaar’s land, threaded itself between two trees, hit and broke a wooden utility pole, crossed his driveway and stopped in his large front yard where his three daughters used to practice softball. They were at a game that day and now won’t go in the yard. His wife, Chrissy VanKavelaar, said they continue to find parts of the car in their yard eight weeks after the crash. “Every time it rains or we mow we find another piece of that car,” she said.”

People in the vicinity of a crash get drawn into it without seeming to have a choice in the matter. Their perspective provides all kinds of interesting details and parallel narratives.

Joshua Brown was a Tesla enthusiast and had signed up to be a test driver. This meant he knew he was testing software; it wasn’t ready for the market yet. From Tesla’s perspective, what seems to count is how many millions of miles their car logged before an accident occurred, which may not have been the best way to lead its report on Brown’s death.

Key for engineers is perhaps the functioning of the sensors that could not distinguish between a bright sky and a bright, white trailer. Possibly, the code analysing the sensor data hadn’t been trained well enough to make that distinction. Interestingly, this is the sort of error a human being wouldn’t make, just as humans can easily distinguish between a Labrador and a Dalmatian while computer programs are only just learning how to. Clearly, miles to go ….

A key detail in this case concerns the nature of autopilot and what it means to engage this mode. Tesla clearly states that its Autopilot mode means a driver is still in control of, and responsible for, the vehicle:

“[Autopilot] is an assist feature that requires you to keep your hands on the steering wheel at all times,” … you need to maintain control and responsibility for your vehicle while using it. Additionally, every time Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.”

What Tesla is saying is that they’re not ready to hand over any responsibility or autonomy to machines. The human is still very much ‘in the loop’ with autopilot.

I suspect the law will need to wrangle over what autopilot means in the context of road transport and cars, as opposed to autopilot in aviation; this history has been traced by Madeleine Elish and Tim Hwang. They write that they “observe a counter intuitive focus on human responsibility even while human action is increasingly replaced by automation.”

There is a historical tendency to ‘praise the machine and punish the human’ for accidents and errors. Their recommendation is to reframe the question of accountability so as to widen the web of actors and their agencies beyond just vehicle and driver. They “propose that the debate around liability and autonomous systems be reframed more precisely to reflect the agentive role of designers and engineers and the new and unique kinds of human action attendant to autonomous systems. The advent of commercially available autonomous vehicles, like the driverless car, presents an opportunity to reconfigure regimes of liability that reflect realities of informational asymmetry between designers and consumers.”

This is so important, and yet I find it difficult to see, even as a speculative exercise, how you’d get Elon Musk to acknowledge his own role, or that of his engineers and developers, in accountability mechanisms. It will be interesting to watch how this plays out in the American legal system, because eventually there are going to have to be laws that acknowledge shared responsibility between humans and machines, just as robots need to be regulated differently from humans.

The Problem with Trolleys at re:publica

I gave my first talk about ethics and driverless cars for a non-specialist audience at re:publica 2016. In it I look at the problem with the Trolley Problem, the thought experiment being used to train machine learning algorithms in driverless cars. Here, I focus on the problem that logic-based notions of ethics have been transformed into an engineering problem, and suggest that this ethics-as-engineering approach is what will allow American law and insurance companies to assign blame and responsibility in the inevitable case of accidents. There is also the tension that machines are assumed to be correct, except when they aren’t, and that this sits within a difficult history of ‘praising machines’ and ‘punishing humans’ for accidents and errors. I end by talking about questions of accountability that look beyond algorithms and software themselves to their sites of production.
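To make concrete what ‘ethics as an engineering problem’ can look like, here is a deliberately toy sketch, in Python, of a trolley-style choice reduced to expected-harm minimisation. All names and numbers are invented for illustration; this is not any manufacturer’s actual decision logic, only a caricature of the framing.

```python
# A toy caricature of "ethics as engineering": a trolley-style choice
# collapsed into a cost function and an argmin. Every value here is invented.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm_estimates: dict[str, float]   # party -> estimated harm (0..1)
    probability_of_harm: float         # likelihood that the harm occurs

def expected_harm(action: Action) -> float:
    """Collapse the dilemma into one number: total harm weighted by its probability."""
    return action.probability_of_harm * sum(action.harm_estimates.values())

def choose(actions: list[Action]) -> Action:
    # The 'ethical' decision becomes choosing the lowest-cost option.
    return min(actions, key=expected_harm)

stay_in_lane = Action("stay in lane", {"pedestrians": 0.9}, 0.8)
swerve = Action("swerve", {"passenger": 0.6}, 0.5)

print(choose([stay_in_lane, swerve]).name)  # -> "swerve"
```

Everything morally contentious, of course, is hidden in who supplies the harm estimates and probabilities, and in the decision that a single minimised number is an adequate stand-in for an ethical judgement.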

Here’s the full talk.

Works cited in this talk:

1. Judith Jarvis Thomson’s 1985 paper in the Yale Law Journal, ‘The Trolley Problem’
2. Patrick Lin’s work on ethics and driverless cars. Also relevant is the work of his doctoral students at UPenn looking at applications of Blaise Pascal’s work to the “Lin Problem”
3. Madeleine Elish and Tim Hwang’s paper ‘Praise the machine! Punish the human!’ as part of the Intelligence & Autonomy group at Data & Society
4. Madeleine Elish’s paper on ‘moral crumple zones’; there’s a good talk and discussion with her on the website of the proceedings of the WeRobot 2016 event at Miami Law School.
5. Langdon Winner’s ‘Do Artifacts Have Politics?’
6. Bruno Latour’s Actor Network Theory.

Experience E

How science represents the real world can be cute to the point of being frustrating. In 7th grade mathematics you have problems like:

“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”

Well, of course there was no such thing then as men taking paternity leave. But you can’t help thinking about the universe of such a problem. What was the work? Were all the men the same? Did they do the work in the same way? Wasn’t one of them better than the rest, and therefore the leader of the pack who got to decide what they would do on their day off?
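For what it’s worth, the textbook arithmetic such a problem expects (setting the paternity-leave twist aside) is a simple inverse proportion: the total work is held fixed in ‘man-days’, so fewer men means proportionally more days.

```latex
\[
W = 6 \ \text{men} \times 19 \ \text{days} = 114 \ \text{man-days},
\qquad
d = \frac{114 \ \text{man-days}}{4 \ \text{men}} = 28.5 \ \text{days}.
\]
```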

Here is the definition of machine learning according to one of the pioneers of machine learning, Tom M. Mitchell[1]:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”

This can be difficult for a social scientist to parse, because you’re curious about the experience E and the experiencer of experience E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who set the standard of Performance P? For a scientist, Experience E itself is not that important; rather, how E is achieved, sustained, and improved on is the important part. How science develops these problem-stories becomes an indicator of its narrativising of the world: a world that needs to be fixed.
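As a way of holding on to the definition, here is a minimal sketch, in Python and entirely hypothetical, of how Mitchell’s E, T and P are usually operationalised: the task T is a toy classification, experience E is the pile of labelled examples the program has seen so far, and performance P is accuracy on a held-out test set, which typically rises as E grows.

```python
# A hypothetical, minimal reading of Mitchell's definition in code.
# T: classify points on a line as left (0) or right (1) of an unknown boundary.
# E: the labelled examples seen so far.
# P: accuracy on a held-out test set; "learning" means P improves as E grows.

import random

random.seed(0)
TRUE_BOUNDARY = 0.6  # unknown to the learner; used only to label the data

def label(x: float) -> int:
    return int(x > TRUE_BOUNDARY)

def fit(experience: list[tuple[float, int]]) -> float:
    """Estimate the boundary as the midpoint between the largest 0-labelled
    and the smallest 1-labelled example seen so far."""
    zeros = [x for x, y in experience if y == 0]
    ones = [x for x, y in experience if y == 1]
    if not zeros or not ones:
        return 0.5  # fall back to a guess
    return (max(zeros) + min(ones)) / 2

test_set = [(x, label(x)) for x in (random.random() for _ in range(1000))]

def performance(boundary: float) -> float:
    return sum(int(x > boundary) == y for x, y in test_set) / len(test_set)

experience: list[tuple[float, int]] = []
for n in (5, 50, 500):
    while len(experience) < n:
        x = random.random()
        experience.append((x, label(x)))
    print(n, round(performance(fit(experience)), 3))  # P typically rises with E
```

What the sketch leaves out is exactly what the questions above ask about: where E comes from, who labelled it, and who decided that this P is the measure that counts.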

This definition is the beginning of the framing of ethics in an autonomous vehicle. Ethics becomes an engineering problem, to be solved through logical probabilities executed and analysed by machine learning algorithms. (TBC)

[1] http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/?utm_source=medium&utm_medium=post&utm_content=chapterlink&utm_campaign=republish