Category Archives: tech

Accident Tourist on Cyborgology

Crossposting my latest piece on Cyborgology.
Accident Tourist: Driverless car crashes, ethics, and machine learning is an essay that attempts to unpack a particular narrative of ethics that has been constructed around driverless car technology. In it, I show that ethics has been constructed as an outcome of machine-learning software rather than developed as a framework of values. How can we read this ethics-as-software in the case of crashes, such as the Tesla crash of May 2016? And how does ethics play out in determining accountability and responsibility for car crashes, which I claim is a powerful force shaping the construction of ethics in AI? Looking at the history of accountability for aviation crashes, I conclude that the notion of accountability in AI cannot be output- or outcome-driven, but should instead encompass the entanglements between machine and human agents working together.

2016. Writing, Publishing.

I wrote different kinds of things this past year. Here they are, starting with the most recent.

Rounded off the year with the daily beat ‘Reflected’ about visitors and special guests in the #glassroom in New York

What is the city when it is made for autonomous vehicles with AI? Over @cyborgology

Started writing for @cyborgology about cinema, cybernetics, automation & cars, with a piece about a visit to a BMW car factory

Published @Info_Activism’s digisec research about digisec trainers & security in context, by Carol Waters and Becky Kazansky

Privacy, visibility, anonymity: Dilemmas in activists’ tech use. New publication from me, @jsdeutch @schultjen

2016 started with the White Room @ Nervous Systems: Quantified Life & the Social Question (text not available online)

33c3 Talk Notes + Video: Entanglements (Ethics in the Data Society)

ENTANGLEMENTS. MACHINE INTELLIGENCE, HUMAN ETHICS, AND DRIVERLESS CARS
(original title: ‘Ethics in the data society’)
Notes for a talk given at 33c3 in Hamburg, Dec 29, 2016
(Video here until it moves to YouTube.)

This talk is not about what the answer should be to the question “how do we program driverless cars to make ethical decisions?” It is about how and why we are asking that question in this way. What do we really mean when we say ‘ethics’? This talk assembles some recent examples of how a narrative around ethics is being developed in the context of driverless cars, and asks whether we are really talking about ethics, or about managing the risks we perceive to be associated with artificial intelligence.

What are some of the ways in which the narrative around ethics is being constructed?

Ethics as the outcome of moral decision-making
Ethics as accountability for errors, breakdowns, accidents.
Ethics as a way to regulate the risks we perceive associated with AI; ethics as the way to construct a rationale for the financialisation of this risk by various regulatory industries.
Ethics in technology design
And the suggestion perhaps that ethics is constituted by multiple factors and actors and is produced locally and contextually.

Driverless cars are being developed with not-very-robust software, and in the open (like Uber’s testing and development in Pittsburgh), and there is little clarity about how safe these technologies are. Moreover, what we know about ethical considerations around AI tends to come from SF, speculative (or science) fiction. We know about AI going rogue, from HAL in 2001: A Space Odyssey to Ava in Ex Machina. With Ava we see an AI that passes the Turing Test by behaving in a ruthless, cunning and unethical way in order to survive, which is one of the most human things it does.

Thus alongside expectations of precision and rationality through computing, we also have fear, fantasy and anxiety with respect to what we think intelligent machines can and will do.

1. Ethics as an outcome of software

“Technology will come to the rescue of its naughty but very clever children” (Donna Haraway) is one way to see the idea that machine learning will eventually enable machines to learn the appropriate response to a situation in which a driverless car may be involved in an accident. There is a history to AI ethics in which ethics is expected to be the outcome of a computer program. In the past this had to be hard-coded in; now, ethics as moral decision-making could effectively be ‘learned’ by showing an algorithm different ways to act in cases of potential accidents.

There is a desire to find models for programming ethics. So the Trolley Problem became popular with Google’s self-driving car project, and there is a new application of it in a project from MIT called the Moral Machine. The Trolley Problem is a 1960s thought experiment in which a decision has to be made between two difficult choices: in the case of a potential accident, either five people are killed, or one. It pits consequentialist ethics (outcomes matter most, therefore saving five people is more important) against Kantian, deontological ethics (what matters most is how you arrive at the reasons for saving those five people; the rules you use to reach a decision). Moral Machine is a project in which you can play the Trolley Problem, and it is also part of a research study being done at MIT.
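As a deliberately crude caricature of what “programming the Trolley Problem” amounts to once it is treated as software, here are the two positions written as decision rules; the scenario numbers and function names are invented for illustration and bear no relation to any real vehicle code:

```python
# Toy illustration only: two ethical positions reduced to decision rules.

def consequentialist_choice(stay_course_deaths, swerve_deaths):
    """Pick whichever action minimises expected deaths (only outcomes count)."""
    return "swerve" if swerve_deaths < stay_course_deaths else "stay_course"

def deontological_choice(stay_course_deaths, swerve_deaths):
    """Follow a rule regardless of the body count: never actively redirect
    harm onto someone who was not already endangered."""
    return "stay_course"

print(consequentialist_choice(5, 1))  # -> "swerve": one death outweighs five
print(deontological_choice(5, 1))     # -> "stay_course": the rule forbids intervening
```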

However, one of the creators of this project has said something that makes you think even he doesn’t believe it is about easy choices:

“At this point, we are looking into various forms of shared control, which means that any accident is going to have a complicated story about the exact sequence of decisions and interventions from the vehicle and from the human driver.” – Jean Francois Bonnefon, Dec 27, 2016, in Gizmodo (ref below)

(Also useful to ask what Moral Machine will do with all the data it collects from its online scenarios)

Perhaps, then, it isn’t as simple as making a choice between one or the other. Others have suggested approaches such as Pascalian programming (Bhargava 2016), in which the outcomes of a potential accident are ranked and rated, and the software makes a decision more ‘randomly’ based on the context (for example, what kind of car, how many people in each, climatic conditions, etc.). In Sepielli’s Pascalian approach, applied by Bhargava, you can even factor in damage to the driverless car and its driver, which the Trolley Problem does not do, and which is generally anathema to car manufacturers.
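A minimal sketch of what such a Pascalian ranking might look like in code follows; the outcome names, probabilities and severity numbers are invented for illustration, and this is my reading rather than Bhargava’s or Sepielli’s actual formulation:

```python
# Score the possible outcomes of a potential accident under uncertainty,
# then choose with weighted randomness rather than a single hard-coded rule.
import random

def pascalian_choice(options, rng=random):
    """options: {action: {"probability_of_harm": p, "severity": s}}.
    Lower expected harm -> higher chance of being chosen."""
    expected_harm = {
        action: o["probability_of_harm"] * o["severity"]
        for action, o in options.items()
    }
    # Invert harms into selection weights so safer actions weigh more.
    weights = {a: 1.0 / (1e-6 + h) for a, h in expected_harm.items()}
    actions, w = zip(*weights.items())
    return rng.choices(actions, weights=w, k=1)[0]

scenario = {
    "brake_hard": {"probability_of_harm": 0.3, "severity": 2},   # risk to own passengers
    "swerve_left": {"probability_of_harm": 0.1, "severity": 8},  # risk to oncoming car
    "swerve_right": {"probability_of_harm": 0.6, "severity": 1}, # risk of damage to the AV itself
}
print(pascalian_choice(scenario))
```

Note that, unlike the Trolley Problem, this toy version can include damage to the vehicle and its occupants as just another ranked outcome.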

There is also the Ethics Bot suggested by Etzioni and Etzioni (2016), in which a bot manages algorithms so as to create appropriate, personalised ethical standards. They base this on users’ use of the Nest home thermostat: a person who is conscious of energy use and manages heating through Nest is taken to be more ethical because they care about the environment. However, it takes many assumptions to suggest that regulating heating has much to do with ethical environmental behaviour. The Ethics Bot monitors energy use and finds patterns that are assumed to represent the individual’s moral standard; the algorithms in Nest that regulate heat and energy use are then regulated by the Ethics Bot.
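A rough sketch of how such an Ethics Bot layer might sit on top of a thermostat’s own algorithm follows. The threshold, the proxy for “caring about the environment”, and the adjustment are all my own invented illustration, not Etzioni and Etzioni’s implementation; the dubiousness of the proxy is precisely the point made above.

```python
# Hypothetical "Ethics Bot": observe a user's thermostat behaviour, treat the
# pattern as a proxy for their (assumed) environmental values, then constrain
# the thermostat's own algorithm accordingly.

def infer_conservation_preference(observed_setpoints_c):
    """Lower average heating setpoints are taken (questionably) as a sign
    that the user values energy conservation. Returns 0..1."""
    avg = sum(observed_setpoints_c) / len(observed_setpoints_c)
    return max(0.0, min(1.0, (22.0 - avg) / 4.0))  # 0 = comfort-first, 1 = conservation-first

def ethics_bot_adjust(requested_setpoint_c, preference):
    """Nudge the thermostat's requested setpoint towards the inferred 'moral standard'."""
    return requested_setpoint_c - 2.0 * preference

history = [19.5, 20.0, 19.0, 19.5]       # past winter setpoints, in Celsius
pref = infer_conservation_preference(history)
print(ethics_bot_adjust(21.0, pref))     # the bot lowers the request slightly
```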

2. What do we think of machines that learn?
Moral Machines, Ethics Bots, and the Pascalian approach to programming for uncertainty are all approaches that expect ethics to be an outcome of software programming, of Big Data thinking: that if you show a database enough options and situations of how to act, it will learn the appropriate ethical decision in the case of an accident. But is this “ethics”, really? What’s the problem with the idea of programming ethics, or with expecting the machine to give us ethical responses? If you want to rely on machine learning and vast data sets to make things more personalised, then the question we have to ask is an update of Turing’s famous question: “what do we think about machines that think?” needs to become “what do we think about machines that learn?”. What we know is that machines, like children and primates, are almost-minds, as Donna Haraway put it. They learn quite literally. We have already seen that machine learning is pretty basic and reproduces the biases existing in its data. So why do we expect that enough training data, gathered from whatever sources, is going to give us ‘ethical’ results produced by machines?

3. It’s almost like we’re building and developing systems to be machine-readable, so that computers, models, databases and simulations can correspond with each other. One of the things I’m interested in is how we reshape systems for the ascendancy of intelligent machines. Perhaps there are other questions we should be asking about ethics, like: what does it mean to have AI in autonomous vehicles?

For example, what is the future of cityspace when it is built for driverless cars? (Read more on the Cyborgology blog.)

4. Ethics-as-accountability ; Ethics and financialisation of risk
There is another way in which we talk about ethics, that is, in terms of accountability. Again, starting from framing the car in terms of its involvement in accidents: who is accountable and responsible, and how should we think about accountability? Accountability before the law, as well as accountability in terms of insurance payouts and product liabilities.

There is a history of car crashes being simulated and modelled from the 1990s onwards, as a way to save money on crash testing and to anticipate how accidents take place. Is it possible that we will see the risk associated with driverless car crashes being modelled, projected, and assigned financial value?

In the 1990s-2000s there was increased use of models and mathematical simulations of crashes by car manufacturers. Nigel Gale quotes an American car maker talking about this:

”Road to lab to math is basically the idea that you want to be as advanced on the evolutionary scale of engineering as possible. Math is the next logical step in the process over testing on the road and in the lab. Math is much more cost effective because you don’t have to build pre-production vehicles and then waste them. We’ve got to get out in front of the technology so it doesn’t leave us behind. We have to live and breathe math. When we do that, we can pass the savings on to the consumer.” (Gale 2005)

There will be increased involvement of different actors in regulating the behaviour of driverless cars: the industry level, new regulatory environments, urban and civic infrastructure, public education, law enforcement, and ways to deal with the massive levels of social change and disruption that these cars will bring. All of these will have some role to play in the legalisation and regulation of ethical behaviour. It may not be ‘ethics’, but it may be claimed as such.

5. Ethics as accountability and design
Another aspect of ethics-as-accountability has to do with design; the idea that accountability lies with the designers and developers of technology.
There is perceived value in showing who the ‘man behind the curtain’ is, in assigning some accountability to them, and in showing how design carries the values of its designers.

There has long been consideration of AI and ethics, and of fears of AI going rogue. There are also the ethics related to the work of scientists and technical workers.
This is where cinema and literature and fantasy come in as well.

There are lots of recent examples of how technology designed by mostly white, male communities does not reflect the realities of the world of its users. For example, assistants like Google Now can help you if you say you are having a heart attack or are feeling suicidal; but if you were to say “I’m being raped”, the response is, “I’m sorry, I don’t know what rape is”. Or the story of the Apple Watch, one of whose cool features may not work for people of colour because of how it was tested on white people.

So there are lots of examples of technology merely reflecting the biases of its designers. Is this really tenable with driverless cars when you have so many different actors and millions of lines of code? It’s a complex feat to identify every actor and factor in a car.

One of the most interesting recent documents that seems to show a serious attempt to think about ethics in artificially intelligent/autonomous systems is from the IEEE, and is called Ethically Aligned Design.

6. Ethics as produced contextually
So is there a way to think about ethics differently, as something that is produced more contextually and locally? It is something of an ambition to see if this can be done.

“technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making. Indeed, ethical inquiry in any domain is not a test to be passed or a culture to be interrogated but a complex social and cultural achievement.” (Ananny 2016)

7. I recently did a workshop with designers and programmers working in a tech company sort of related to autonomous driving. I am interested in playing with different methods and approaches right now, and I’m really interested in Scenario Planning and Design Fiction as ways to do this.

Would it be possible for designers and programmers to consider ethics contextually, locally, and imaginatively in relation to actual things that could happen a few years from now? Here are the cases I worked through with a group of designers and programmers recently:

– Build a map layer for use by parents and children to share transportation
– How could a car ride sharing fleet service build an option for women users to feel safe?
– What are the issues in a way finding or route finding service for autonomous cars that avoids ‘high crime neighbourhoods’?

What was interesting about the responses was that the designers and programmers I talked to are highly aware of the issues and concerns, but feel they are part of a corporate entity that eventually has to make money by selling things in a certain way. Paula Bialski’s work also shows that programmers in tech companies like this one are looking for spaces for micro-resistance and micropolitics.

References
Ananny, M. (2016). Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology, & Human Values 41(1): 93-117.
Bhargava, V. (2016, forthcoming). What if Blaise Pascal designed autonomous vehicles? In Roboethics 2.0.
Cramer, F. (2016). Crapularity Hermeneutics. http://cramer.pleintekst.nl/essays/crapularity_hermeneutics/#fnref37
Etzioni, A. and Etzioni, O. (2016). AI Assisted Ethics. Ethics and Information Technology 18: 149-156.
Gale, N. (2005). ‘Road-to-lab-to-math: A New Path to Improved Product’, Automotive Engineering International (May): 78-79.
Weiner, S. (2016). Gizmodo. http://gizmodo.com/if-a-self-driving-car-kills-a-pedestrian-who-is-at-fau-1790049637

Machine Research workshop w/ Constant, Aarhus U, Transmediale

I’m in Brussels with a group of fellow PhDs, academics, artists and technologists, at a workshop called Machine Research organised by Constant, Aarhus University’s Participatory IT centre, and Transmediale.

The workshop aims to engage research and artistic practice that takes into account the new materialist conditions implied by nonhuman techno-ecologies including new ontologies of learning and intelligence (such as algorithmic learning), socio-economic organisation (such as blockchain), population management and tracking (such as datafied borders), autonomous or semi-autonomous systems (such as bots or drones) and other post-anthropocentric reconsiderations of agency, materiality and autonomy.

I wanted to work on developing a subset of my ‘ethnography of ethics’ with a focus on error, trying to think about what error means and how it is managed in the context of driverless car ethics. It’s been great to have this time to think with other people working on related, and very unrelated, topics. It is the small things that count, really; like being able to turn around and ask someone: “what’s the difference between subjection, subjectivity, subjectification, subjectivization?”. The workshop was as much about researching the how of machines as it was about the how of research. I appreciated some encouraging thoughts and questions about what an ‘ethnography’ means as it relates to ethics and driverless cars; as well as a fantastic title for the whole thing (thanks Geoff!!).

Constant’s work involves a lot of curious, cool, interesting publishing and documentation projects, including some of an Oulipo variety. So one of the things they organised for us was etherpads. I use etherpads a lot at work, but for some people this was new. It was good seeing pads in “live editing” mode, rather than just for storage and sharing. We used the pads to annotate everyone’s presentations with comments, suggestions, links, and conversation. They had also made text filters that performed functions like deleting prepositions (the “stop words” filter), or recomposing text based on Markov chains (the Markov filter):

“by organizing the words of a source text stream into a dictionary, gathering all possible words that follow each chunk into a list. Then the Markov generator begins recomposing sentences by randomly picking a starting chunk, and choosing a third word that follows this pair. The chain is then shifted one word to the right and another lookup takes place and so on until the document is complete.”

This is the basis of spam filters too.
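For the curious, here is a minimal sketch of the kind of bigram Markov generator the quoted description refers to: build a dictionary mapping each pair of words to the words that follow it, then recompose text by walking the chain. This is my own illustration, not Constant’s actual filter code, and the sample source text is invented.

```python
# Minimal bigram Markov text generator, as described in the quote above.
import random

def build_chain(text):
    words = text.split()
    chain = {}
    for a, b, c in zip(words, words[1:], words[2:]):
        chain.setdefault((a, b), []).append(c)   # gather words that follow each chunk
    return chain

def generate(chain, length=30, rng=random):
    pair = rng.choice(list(chain.keys()))        # randomly pick a starting chunk
    out = list(pair)
    for _ in range(length):
        followers = chain.get(pair)
        if not followers:                        # dead end: stop early
            break
        nxt = rng.choice(followers)              # choose a third word that follows this pair
        out.append(nxt)
        pair = (pair[1], nxt)                    # shift the chain one word to the right
    return " ".join(out)

source = ("we used the pads to annotate the presentations with comments and links "
          "and we used the pads to keep notes and annotate the conversation with links")
print(generate(build_chain(source)))
```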

In the course of the workshop people built new filters. Dave Young (who is doing really fascinating research on institutionality and network warfare in the Cold War-era US through the study of its grey literature, such as training manuals) made an “Acronymizer”, a filter that searches for much-used phrases in a text and creates acronyms from them.
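Here is a guess at what such a filter might look like; the phrase length, the threshold, and the replacement logic are my own reconstruction, not Dave Young’s code.

```python
# Hypothetical "Acronymizer": find multi-word phrases that recur in a text
# and replace them with acronyms built from their initials.
from collections import Counter
import re

def acronymize(text, phrase_len=3, min_count=2):
    words = re.findall(r"[A-Za-z']+", text)
    phrases = Counter(
        tuple(words[i:i + phrase_len]) for i in range(len(words) - phrase_len + 1)
    )
    out = text
    for phrase, count in phrases.items():
        if count >= min_count:                      # a much-used phrase
            acronym = "".join(w[0].upper() for w in phrase)
            out = out.replace(" ".join(phrase), acronym)
    return out

print(acronymize("command and control is central; command and control recurs in manuals."))
# -> "CAC is central; CAC recurs in manuals."
```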

We’ve also just finished creating our workshop “fanzine” using Sarah Garcin’s Publication Jockey, an Atari-punk, handmade publication device made with a Makey Makey and crocodile clips. The fanzine is a template and experiment for what we will produce at Transmediale. Some people created entirely new works by applying their machine-research practices to pieces of their own text. Based on the really great inputs I got, I rewrote my post as a series of seven scenarios to think about how ethics may be produced in various sociotechnical contexts. There’s that nice ‘so much to think about’ feeling! (And to do, of course.)

Sarah Garcin’s Publication Jockey, PJ

New: Privacy, Visibility, Anonymity: Dilemmas in Tech Use by Marginalised Communities

I started this Tactical Tech project two years ago and am thrilled to see it finally out. Research takes time! This is a synthesis report of two case studies we did in Kenya and South Africa on risks and barriers faced by marginalised communities in using technology (primarily in transparency and accountability work). You can download the report on the Open Docs IDS website here

The Tesla Crash

It’s happened. A person has died in an accident involving a driverless car, raising difficult questions about what it means to regulate autonomous vehicles, and to negotiate control and responsibility with software that may one day be very good, but currently is not.

In tracing an ethnography of error in driverless cars, I’m particularly interested in how error happens, is recorded, understood, regulated and then used as feedback in further development and learning. So the news of any and every crash or accident becomes a valuable moment to document.

What we know from various news reports is that 40-year-old Joshua Brown was, supposedly, watching a Harry Potter DVD while test-driving his Tesla with the autopilot mode enabled, when the car slammed into the underside of a very large trailer truck. Apparently the sensors on the car could not distinguish between the bright sky and the white of the trailer truck. The top of the car was sheared off as it passed at speed under the carriage of the trailer, and debris was scattered far.

Here’s an excerpt from a Reuters report of the crash from the perspective of a family whose property parts of the car landed in:

“Van Kavelaar said the car that came to rest in his yard next to a sycamore tree looked like a metal sardine can whose lid had been rolled back with a key. After the collision, he said, the car ran off the road, broke through a wire fence guarding a county pond and then through another fence onto Van Kavelaar’s land, threaded itself between two trees, hit and broke a wooden utility pole, crossed his driveway and stopped in his large front yard where his three daughters used to practice softball. They were at a game that day and now won’t go in the yard. His wife, Chrissy VanKavelaar, said they continue to find parts of the car in their yard eight weeks after the crash. “Every time it rains or we mow we find another piece of that car,” she said.”

People in the vicinity of a crash get drawn into it without seeming to have a choice in the matter. Their perspective provides all kinds of interesting details and parallel narratives.

Joshua Brown was a Tesla enthusiast and had signed up to be a test driver. This meant he knew he was testing software; it wasn’t ready for the market yet. From Tesla’s perspective, what seems to count is how many millions of miles their cars had logged before an accident occurred, which may not have been the best way to lead a report on Brown’s death.

Key for engineers is perhaps the functioning of the sensors that could not distinguish between a bright sky and a bright, white trailer. Possibly, the code analysing the sensor data hasn’t been trained well enough to make the distinction between the bright sky and a bright, white trailer. Interestingly, this is the sort of error a human being wouldn’t make; just as we know that humans can distinguish between a Labrador and a Dalmatian but computer programs are only just learning how to. Clearly, miles to go ….

A key detail in this case is about the nature of auto-pilot and what it means to engage this mode. Tesla clearly states that its auto pilot mode means that a driver is still in control and responsible for the vehicle:

“[Auto pilot] is an assist feature that requires you to keep your hands on the steering wheel at all times,” … you need to maintain control and responsibility for your vehicle while using it. Additionally, every time that auto pilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.”

What Tesla is saying is that they’re not ready to hand over any responsibility or autonomy to machines. The human is still very much ‘in the loop’ with autopilot.

I suspect the law will need to wrangle over what auto pilot means in the context of road transport and cars, as opposed to auto pilot in aviation; this history has been traced by Madeleine Elish and Tim Hwang. They write that they “observe a counter intuitive focus on human responsibility even while human action is increasingly replaced by automation.”

There is a historical tendency to ‘praise the machine and punish the human’ for accidents and errors. Their recommendation is to reframe the question of accountability so as to widen the web of actors and their agencies, rather than looking just at vehicle and driver. They “propose that the debate around liability and autonomous systems be reframed more precisely to reflect the agentive role of designers and engineers and the new and unique kinds of human action attendant to autonomous systems. The advent of commercially available autonomous vehicles, like the driverless car, presents an opportunity to reconfigure regimes of liability that reflect realities of informational asymmetry between designers and consumers.”

This is so important, and yet I find it difficult to see, even as a speculative exercise, how you’d get Elon Musk to acknowledge his own role, or that of his engineers and developers, in accountability mechanisms. It will be interesting to watch how this plays out in the American legal system, because eventually there are going to have to be laws that acknowledge shared responsibility between humans and machines, just as robots need to be regulated differently from humans.

The Problem with Trolleys at re:publica

I gave my first talk about ethics and driverless cars for a non-specialist audience at re:publica 2016. In it I look at the problem with the Trolley Problem, the thought experiment being used to train machine learning algorithms in driverless cars. Here, I focus on how logic-based notions of ethics have been transformed into an engineering problem, and suggest that this ethics-as-engineering approach is what will allow American law and insurance companies to assign blame and responsibility in the inevitable case of accidents. There is also the tension that machines are assumed to be correct, except when they aren’t, and that this sits within a difficult history of ‘praising machines’ and ‘punishing humans’ for accidents and errors. I end by talking about questions of accountability that look beyond algorithms and software to the sites of their production.

Here’s the full talk.

Works cited in this talk:

1. Judith Jarvis Thomson’s 1985 paper in the Yale Law Journal, The Trolley Problem
2. Patrick Lin’s work on ethics and driverless cars. Also relevant is the work of his doctoral students at UPenn looking at applications of Blaise Pascal’s work to the “Lin Problem”
3. Madeleine Elish and Tim Hwang’s paper ‘Praise the machine! Punish the human!’ as part of the Intelligence & Autonomy group at Data & Society
4. Madeleine Elish’s paper on ‘moral crumple zones’; there’s a good talk and discussion with her on the website of the proceedings of the WeRobot 2016 event at Miami Law School.
5. Langdon Winner’s ‘Do Artifacts Have Politics’
6. Bruno Latour’s Actor Network Theory.

Experience E

How science represents the real world can be cute to the point of being frustrating. In 7th-grade mathematics you get problems like:

“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”

Well, of course, back then there was no such thing as men taking paternity leave. But you can’t help thinking about the universe of such a problem. What was the work? Were all the men the same, did they do the work in the same way, wasn’t one of them better than the rest, and therefore the leader of the pack who got to decide what they would do on their day off?

Here is the definition of machine learning according to one of the pioneers of machine learning, Tom M. Mitchell[1]:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”

This can be difficult for a social scientist to parse, because you’re curious about the experience E and the experience-r of experience E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who sets the standard of performance P? For a scientist, experience E itself is not that important; rather, how E is achieved, sustained and improved on is the important part. How science develops these problem-stories becomes an indicator of its narrativising of the world; a world that needs to be fixed.
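To make the letters concrete, here is a toy rendering of Mitchell’s definition in code. The task, the data and the learner are invented purely for illustration: task T is deciding whether a point lies above the line y = x, experience E is a set of labelled example points, and performance P is accuracy on a held-out test set.

```python
# Toy illustration of Mitchell's T / E / P framing, not a real ML system.
import random

random.seed(0)

def make_examples(n):
    """Experience E: n labelled points; label is True when the point lies above y = x."""
    pts = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(n)]
    return [((x, y), y > x) for x, y in pts]

def train(examples):
    """'Learning' here is just averaging labelled points into two centroids."""
    def centroid(pts, default):
        if not pts:
            return default
        return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))
    above = [p for p, lab in examples if lab]
    below = [p for p, lab in examples if not lab]
    return centroid(above, (0.0, 1.0)), centroid(below, (1.0, 0.0))

def predict(model, p):
    """Task T: predict 'above' if the point is closer to the 'above' centroid."""
    (ax, ay), (bx, by) = model
    da = (p[0] - ax) ** 2 + (p[1] - ay) ** 2
    db = (p[0] - bx) ** 2 + (p[1] - by) ** 2
    return da < db

test = make_examples(500)
for n in (4, 40, 400):                               # more experience E ...
    model = train(make_examples(n))
    p = sum(predict(model, x) == lab for x, lab in test) / len(test)
    print(f"E = {n:4d} examples -> P = {p:.2f}")     # ... typically better performance P
```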

This definition is the beginning of the framing of ethics in an autonomous vehicle. Ethics becomes an engineering problem to be solved by logical probabilities executed and analysed by machine learning algorithms. (TBC)

[1] http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/