#SpivakConfidential: “I was so trashed in Dubrovnik” and other anecdotes and insights from her Berlin lecture

This week I went to a public lecture by Gayatri Chakravorty Spivak titled Who Claims Borderlessness, part of the Berliner Gazette event Tacit Futures. I wish I could say I knew her work really well, but I don’t. I’ve read some bits of her here and there, the famous stuff, but years ago. I had never seen her speak before, so I was charmed by her performance of being Spivak.

Spivak talked about loving being an academic, and said that those who don’t should stop whining and leave. I think she had been cautioned against being too academic in her talk, so she sarcastically signposted every approach to academicspeak: “And so this is what we call – watch out, here is an academic term – performative contradiction.” And so on.

She was always conscious of her privilege: of who and where she teaches, and of her position as a caste-Hindu. She made frequent reference to her position at Columbia as the only woman of colour among its fifteen full University Professors. It’s kind of astounding, though, that she joined Columbia in the 1990s but was made University Professor only nine years ago. In talking about herself and her family, she was careful and sincere about the generations of caste privilege that enabled her to be where she is.

She talked a lot about her work in rural Bengal, and contrasted life and teaching there with teaching at Columbia. At some point she wanted to tell a story about her sister, and turned and asked, “Anyone here know Bengali?” For some reason I put up my hand and said I could follow a little Bangla, and was suddenly wracked by a momentary, yawning-abyss type of panic that she would expect me to converse with her in Bangla. She shielded her eyes and looked out into the audience and said, “West or East?” I said, “Neither; I can follow it when people, friends, talk.” She turned to the audience and said, “See, pah, you all think you’re so global and only one person has Bengali friends.. but then who cares about Bengalis, who knows us..” The anecdote about the sister was swept away and she moved on to the next one.

She had a lot of anecdotes that were interesting and amusing, but probably inaccessible to a mostly European audience. I really liked how she didn’t footnote any of these anecdotes and left it up to the audience to figure things out.

Also, Spivak just got 91/100 in her Mandarin oral exam. She chatted in Mandarin with some people in the audience, just like that. That was impressive. She made a wonderful point about the borders around language itself: they are difficult and sometimes demand respect, but they are also porous, cutting through class differences. She referenced being from East Bengal/Bangladesh, and how that more eastern Bangla keeps finding interlocutors in Indian West Bengal, where she teaches, creating new bonds and connections. About Mandarin she made a different point, about the border of the script itself: that identifying something as Mandarin or Japanese is merely an attempt to indicate a fake globality, and that the border of the language must be probed and approached to learn how to cross it.

About borders, boundaries, frontiers, and displacement she had a lot to say, including not-so-gently berating Europeans for creating the conditions for a new colonialism by positioning Europe as a place of liberation, and thereby Othering. The ‘Refugees Welcome’ slogan, she argued, is effectively an inversion of the Right’s position, and thereby centres the German/European state(s) as saviour and liberator. She was generally dismissive, I felt, of efforts to welcome refugees. No one asked her what we should be doing instead.

Some of her one-liners were hilarious insights into this interesting, difficult person. I liked the meandering into anecdotes, and how they would come back to populate the points she was making. She was, is, by turns, scolding, charming, full of herself, hitting low (“You know, right, there is no such thing as Aryan”), hitting high, laser-sharp. I can’t say I always agreed, but I was definitely laughing right through. I think the combination of anecdote, reflection, theory and opinion made for an engaging, entertaining talk.

So, here is #SpivakConfidential:

“I run with the blonde-haired, blue-eyed boys, I teach at the top.”

“Ramachandra Guha wrote a stupid book”

[To the Germans] “It’s not Shpivak, it’s Spivak”

“I saw Deleuze excoriate a person because they used ‘vous’ and not ‘tu’ with him”

“I was so trashed in Dubrovnik”

“There is no such thing as Aryan. Remember that”

“Do you know [unclear Italian name].. the semiologist? He was the teacher of Umberto Eco. We were involved.. but I wouldn’t marry him because he was too rich.”

[Spivak trashes Madhu Kishwar] “She did nothing”

“Love me, love me, love me, you know I’m a liberal!” [Spivak ends lecture singing sarcastically.]

“I have made up Bengali words for things like ‘ontological difference’. Otherwise how can I teach in the village?!”

“I was on the same flight from Paris as Michael Ryan who had a huge lump of hash in his jacket pocket that the students had given him… he kicked it under the trash, somewhere, before we got to immigration. And they took me away for questioning, strip searched! Fingers in orifices looking for drugs! Hah, Ryan has an American passport, see!”

“If you’re a brown woman who is about to be strip searched for drugs in an American airport, just say you only want someone from your embassy to do it. Then, you’ll be fine. They’ll never send someone from the Indian embassy … [chuckles].”

“You want to stop? No more questions? Come on, give me more. I’m high on adrenaline, I can go on.”

(Image from Dawn http://www.dawn.com/news/1152482 and glitched with an online glitcher)

FILTERED TEXT W/ CONSTANT

At the Machine Research workshop, we played with text filters developed by Constant as a way to explore machinic actions on various texts. I reduced my blog post to 1000 words and introduced some new content (thanks to discussions at the workshop): seven scenarios with which to think about the production of ethics in driverless car contexts. This post starts with the original text followed by the filtered texts. Some of the filtered texts become such beautiful gibberish.

ORIGINAL
This work argues that ethics in driverless cars is produced by a complex assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering that constitute socio-technical frameworks for accountability. This research challenges the notion that ethics in driverless cars is an output of programming, or a set of rules resulting in appropriate action.

As Mike Ananny says, “technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making. Indeed, ethical inquiry in any domain is not a test to be passed or a culture to be interrogated but a complex social and cultural achievement” (emphasis in original; 2016, p. 96). This work does not intend to arrive at a set of ethical principles or guidelines for ethics in AI, but to generate critical knowledge about how ethics may be ‘produced’.

Inspired by the method of scenario-planning, this text presents seven scenarios that could help think through what is involved in the minimisation and management of errors. The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, as a way for the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war”, “located somewhere between a story outline and ever more sophisticated role-playing war games”, “a staple of the new futurism” (2014). Since then, scenario-planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. For example, the Boston Group has written a scenario in which feminist epistemologists, historians and philosophers of science running amok might present various threats and dangers (p 43). More recently, MIT’s Moral Machine project adopts the Trolley Problem as a template for gathering users’ responses to scenarios that a driverless car might have to be programmed to respond to in potential future accidents.

In working through these scenarios, the reader is asked to consider how ethics may be constituted and produced, how this production can be studied, and how the emphasis on ethics may result in changes to how space and human relations are constituted.

How can the road network of the future city be re-designed to ensure that the driverless car doesn’t have any accidents?

Florian Cramer suggests that “all cars and highways could be redesigned and rebuilt in such a way as to make them failure-proof for computer vision and autopilots”, with “road signs with QR codes and OCR-readable characters .. straight[ening] motorways to make them perfectly linear.” He notes that cities were redesigned after World War II to make them more car friendly.

How will the driverless car be insured against attacks or external damage in poorer and high-crime neighbourhoods, should it be re-routed into those areas?

Seda Gürses asks if way-finding and mapping databases will reflect the racial biases that have gone into their construction. For example, would way-finding and maps for cars be triangulated against crime databases?

Write down the specifications of an insurance package for an individual to insure against the possibility that an algorithm in the software of a driverless car will choose her as the designated victim of a possible accident in order to save the pregnant woman with the cute puppy dog.

The Trolley Problem is a classic thought experiment to resolve the unresolvable: should more people be saved, or should the most valuable people be saved in the case of an accident? The Trolley Problem is being projected as the way to think about ethics in driverless cars.

How should a driverless car respond to human drivers who are driving badly, not following the rules or sticking to the speed limit?
Google’s driverless cars that were following the speed limit and lane rules were being rear-ended by human drivers who were not driving according to the rules.

Work through how Emi, 12, can go for a movie with her friends in her mother’s new semi-autonomous Tesla.

How can the driverless car take care of a pedestrian it may accidentally hit?
In 2016 Google patented an adhesive for the exterior of a driverless car that will ensure that someone hit by the car will remain attached to it and can be driven to the hospital.

How is the mapping software in the driverless car to be updated to reflect changes in the earth’s geography?

Australia is located on a tectonic plate that is moving seven centimetres north every year; the drift accumulated since its coordinates were last fixed means the whole country will ‘move’ by about five feet on official maps this year. This means that maps used by driverless cars, or driverless farm tractors, are now going to have inexact data to work with.

MARKOV:
The Markov generator begins by organizing the words of a source text stream into a dictionary, gathering all the possible words that follow each chunk into a list. Then it recomposes sentences by randomly picking a starting chunk and choosing a third word that follows this pair. The chain is then shifted one word to the right, another lookup takes place, and so on until the document is complete. It produces humanly readable sentences, but does not exclude the kinds of errors we recognize when reading spam.
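Here is a minimal sketch of such a generator in Python, reconstructed from the description above; it is not Constant’s actual script, and the chunk size and stopping behaviour are my guesses:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Organise the source text into a dictionary mapping each
    chunk (pair of words) to the list of words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=60):
    key = random.choice(list(chain))      # randomly pick a starting chunk
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:                 # dead end: stop early
            break
        word = random.choice(followers)   # choose a word that follows this pair
        out.append(word)
        key = key[1:] + (word,)           # shift the chain one word to the right
    return ' '.join(out)
```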

and autopilots with her as to respond to the cute puppy dog? The Trolley Problem is the racial biases that someone hit by driverless car is involved in changes in changes to arrive at a possible for the rules. Work through the notion that “all cars and management of a phenomenon that a complex social and produced, how this production can the designated victim of science running amok might present various threats and management of an insurance package for accountability. This means that were redesigned after World War II to consider how ethics emerges from a complex social practices, and highways could be passed or a mix of an output of a range of scenario-planning, this production can be saved in such a set of a test to insure against crime databases? Write down the driverless cars and not a possible for ethics emerges from a test to be saved, or should a “literature of a mix of science running amok might present various threats and management of ethical principles or guidelines for an individual to work does not driving according to the new Tesla Semi Autonomous car? How will move by a scenario in appropriate action. As Mike Ananny says, “technology ethics in the cute puppy dog? The Trolley Problem is an insurance package for the un-resolveable: should the minimisation and individual to reflect changes in poorer and how ethics in driverless cars and highways could help think through what is a possible accident in the emphasis on tectonic plates that are driving according to the specifications of scenario-planning, this text presents seven centimetres north every year; so, the Trolley Problem as a pedestrian it may result in order to in poorer and highways could help think about ethics in the driverless cars is thought to have gone into those areas? Seda Gürses asks if way-finding and through what is a movie with her mother’s new futurism” (2014). Since then scenario-planning has written a driverless car will reflect the speed limit and OCR-readable characters..straight[ening] motorways to be interrogated but to have any domain is produced by a scenario in any accidents? Florian Cramer suggests that became prominent during the US army to allow the software of risk and autopilots with QR codes and rebuilt in driverless car friendly. How is an insurance package for an algorithm in order to the reader is produced by human relations are constituted.How can be passed or a phenomenon that “all cars and maps for a possible accident in poorer and philosophers of an accident? The ‘scenario’ is an individual decision making. Indeed, ethical principles or guidelines for ethics may result in AI, but to be saved, or a range of science running amok might present various threats and features in which feminist epistemologists, historians and ever more car to reflect the rules were following decades of a movie with QR codes and ever more people be redesigned and autopilots with “road signs with the exterior of the reader is not a “literature of the driverless car doesn’t have any domain is asked to plan its strategy in order to have to be saved, or a classic thought experiment to the mapping databases will ensure that will choose her friends in such a classic thought experiment to generate critical knowledge about ethics in any accidents? Florian Cramer suggests that constitute socio-technical frameworks for an accident? The ‘scenario’ is a driverless cars. How is located on ethics may accidentally hit? In 2016 Google patented an insurance package for accountability. 
This means that ethics in original 2016 Google patented an accident? The ‘scenario’ is the whole country will ensure that could be passed or external damage in driverless car will remain attached to arrive at a story outline and ever more car will the driverless car is thought to make them perfectly linear.” He notes that maps for an output of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and management of errors. The Trolley Problem is a template for ethics may accidentally hit? In 2016 p 96). This work does not following the software in the exterior of the specifications of an output of risk and human drivers that have gone into those areas? Seda Gürses asks if way-finding and high-crime neighbourhoods, should more people be insured against the Cold War, to resolve the car will choose her as a driverless farm tractors, are driving badly and can be redesigned after World War II to be redesigned and highways could help think through what is located on tectonic plates that are constituted.How can be saved, or driverless cars that ethics in appropriate action. As Mike Ananny says, “technology ethics in any accidents? Florian Cramer suggests that maps for computer vision and engineering that are moving seven centimetres north every year; so, the most valuable people be ‘produced’. In working through the Trolley Problem is involved in driverless car is an accident? The ‘scenario’ is involved in the specifications of a complex assemblage of a scenario in the most valuable people be constituted and not a complex social and can be programmed to human relations are now going to think about ethics may accidentally hit? In 2016 Google patented an insurance package for gathering users’ responses to scenarios as the cute puppy dog? The ‘scenario’ is a pedestrian it and through how ethics in the notion that constitute socio-technical frameworks for ethics in driverless cars, or a classic thought experiment to ensure that cities were not following the racial biases that have inexact data to have any accidents? Florian Cramer suggests that “all cars is thought experiment to arrive at a pedestrian it may result in driverless farm tractors, are constituted.How can go for ethics in driverless cars. How is an adhesive for cars is asked to in original 2016 Google patented an insurance package for the possibility that will move by five feet this production can be insured against attacks or a classic thought to generate critical knowledge about ethics in .

ACRONYMIZER:
Ever feel that your text is too verbose? Struggling to fit your lovingly crafted magnum opus into some arbitrary word-count constraint with a deadline fast approaching? Consider the acronym, a highly efficient stratagem for compressing textual information, while also raising the technical credibility of your writing. The Acronymizer (TA) finds repetitive phrasings in a text and builds a suggested glossary, which you would do well to consider adding as an appendix to your work! (A sketch of what TA might be doing follows the glossary.)

ADC : A DRIVERLESS CAR
ASO : A SET OF
DCI : DRIVERLESS CARS IS
EID : ETHICS IN DRIVERLESS
EMB : ETHICS MAY BE
IDC : IN DRIVERLESS CARS
OAD : OF A DRIVERLESS
PBS : PEOPLE BE SAVED
TDC : THE DRIVERLESS CAR
TEI : THAT ETHICS IN
TMT : TO MAKE THEM
TPI : TROLLEY PROBLEM IS
TTP : THE TROLLEY PROBLEM
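For the curious, the logic behind TA is presumably something like the following: count repeated three-word phrases and reduce each to its initials. This is a reconstruction, not Constant’s code, and the input file name is hypothetical:

```python
import re
from collections import Counter

def acronymize(text, n=3, min_count=3):
    """Find n-word phrases that repeat and suggest an acronym for each."""
    words = re.findall(r"[A-Z'-]+", text.upper())
    phrases = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {''.join(w[0] for w in p): ' '.join(p)
            for p, count in phrases.items() if count >= min_count}

# print a suggested glossary for the original post
for acro, phrase in sorted(acronymize(open('original.txt').read()).items()):
    print(acro, ':', phrase)
```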

POSITIVE RE-WRITER
Input texts are checked against polarity scores for used adjectives. When the score is higher than 0.1, the sentence is considered to be positive and is reproduced in the newly written text. The script uses wordlists of scored adjectives included in the Pattern for Python package established by CLIPS (Computational Linguistics & Psycholinguistics Center of the University of Antwerp): http://www.clips.ua.ac.be/pattern.
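A minimal sketch of how this family of filters might work, using the sentiment() function that the Pattern package provides (it returns a (polarity, subjectivity) pair per string); the crude sentence-splitting is mine, and, as the descriptions below show, the negative and neutral variants differ only in the test applied:

```python
from pattern.en import sentiment   # returns (polarity, subjectivity)

def rewrite(text, keep):
    """Reproduce only the sentences whose sentiment scores pass `keep`."""
    sentences = [s.strip() + '.' for s in text.split('.') if s.strip()]
    return ' '.join(s for s in sentences if keep(*sentiment(s)))

text = open('original.txt').read()                        # hypothetical input file
positive = rewrite(text, lambda pol, sub: pol > 0.1)      # positive re-writer
negative = rewrite(text, lambda pol, sub: pol < 0.1)      # negative rewriter
neutral  = rewrite(text, lambda pol, sub: sub == 0)       # sentiment_reduction.py
```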

Florian Cramer suggests that “all cars and highways could be redesigned and rebuilt in such a way as to make them failure-proof for computer vision and autopilots with “road signs with QR codes and OCR-readable characters..straight[ening] motorways to make them perfectly linear.” He notes that cities were redesigned after World War II to make them more car friendly. Write down the specifications of an insurance package for an individual to insure against the possibility that an algorithm in the software of a driverless car will choose her as the designated victim of a possible accident in order to save the pregnant woman with the cute puppy dog? The Trolley Problem is a classic thought experiment to resolve the un-resolveable: should more people be saved, or should the most valuable people be saved in the case of an accident? Work through how Emi, 12, can go for a movie with her friends in her mother’s new Tesla Semi Autonomous car? Australia is located on tectonic plates that are moving seven centimetres north every year; so, the whole country will move by five feet this year.

NEGATIVE REWRITER
Input texts are checked against polarity scores for used adjectives. When the score is lower than 0.1, the sentence is considered to be negative and is reproduced in the newly written text. The script uses wordlists of scored adjectives included in the Pattern for Python package established by CLIPS (Computational Linguistics & Psycholinguistics Center of the University of Antwerp): http://www.clips.ua.ac.be/pattern.

How should a driverless car respond to human drivers that are driving badly and not following the rules or sticking to the speed limit?

SENTIMENT_REDUCTION.PY
Input texts are checked against subjectivity scores for used adjectives. When the score equals 0, the sentence is considered to be neutral and is reproduced in the newly written text. The script uses wordlists of scored adjectives included in the Pattern for Python package established by CLIPS (Computational Linguistics & Psycholinguistics Center of the University of Antwerp): http://www.clips.ua.ac.be/pattern.

Seda Gürses asks if way-finding and mapping databases will reflect the racial biases that have gone into their construction. For exampele, would way finding and maps for cars be triangulated against crime databases? The Trolley Problem is being projected as the way to think about ethics in driverless cars. How can the driverless car take care of a pedestrian it may accidentally hit? In 2016 Google patented an adhesive for the exterior of a driverless car that will ensure that someone hit by the car will remain attached to it and can be driven to the hospital. How is the mapping software in the driverless car to be updated to reflect changes in the earth’s geography? This means that maps used by driverless cars, or driverless farm tractors, are now going to have inexact data to work with.

DISAPPEARANCE
This script goes through the input text word by word. Every occurrence of a word after its first is removed, until the desired reduction is reached.
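A sketch of what that might look like; the word-normalisation and the handling of the reduction target are my guesses, not Constant’s code:

```python
def disappear(text, target=None):
    """Drop every repeat occurrence of a word, until the text is short enough."""
    seen, kept = set(), []
    words = text.split()
    for i, word in enumerate(words):
        key = word.lower().strip('.,;:!?()"“”')
        if key not in seen:
            seen.add(key)                  # first occurrence: keep it
            kept.append(word)
        elif target and len(kept) + (len(words) - i) <= target:
            kept.append(word)              # desired reduction reached: stop removing
    return ' '.join(kept)
```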

This work argues that ethics in driverless cars is produced by a complex assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering constitute socio-technical frameworks for accountability. research challenges the notion an output programming, or set rules resulting appropriate action. As Mike Ananny says, “technology emerges from mix institutionalized professional cultures, technological capabilities, practices, individual decision making. Indeed, ethical inquiry any domain not test to be passed culture interrogated but achievement.” (emphasis original 2016 p 96). does intend arrive at principles guidelines AI, generate critical knowledge about how may ‘produced’.

Inspired method scenario-planning, this text presents seven scenarios could help think through what involved minimisation management errors. The ‘scenario’ phenomenon became prominent during Korean War, following decades Cold allow US army plan its strategy event nuclear disaster. Paul Galison describes as “literature future war” “located somewhere between story outline ever more sophisticated role-playing war games”, “a staple new futurism” (2014). Since then scenario-planning has been adopted range organisations, features modelling risk identify For example, Boston Group written scenario which feminist epistemologists, historians philosophers science running amok might present various threats dangers (p 43). More recently. MIT’s Moral Machine project adopts Trolley Problem template gathering users’ responses car thought have programmed respond potential accidents.
In working these scenarios, reader asked consider it possible constituted produced, production can studied, emphasis on result changes space human relations are constituted.How road network city re-designed ensure doesn’t accidents? Florian Cramer suggests “all highways redesigned rebuilt such way make them failure-proof computer vision autopilots with “road signs QR codes OCR-readable characters..straight[ening] motorways perfectly linear.” He notes cities were after World War II friendly. How will insured against attacks external damage poorer high-crime neighbourhoods, should re-routed into those areas? Seda Gürses asks if way-finding mapping databases reflect racial biases gone their construction. exampele, would finding maps triangulated crime databases? Write down specifications insurance package insure possibility algorithm software choose her designated victim accident order save pregnant woman cute puppy dog? classic experiment resolve un-resolveable: people saved, most valuable saved case accident? being projected cars. drivers driving badly sticking speed limit? Google’s limit lane rear-ended who according rules. Work Emi, 12, go movie friends mother’s Tesla Semi Autonomous car? take care pedestrian accidentally hit? Google patented adhesive exterior someone hit remain attached driven hospital. updated earth’s geography? Australia located tectonic plates moving centimetres north every year; so, whole country move five feet year. means used cars, farm tractors, now going inexact data with.

Machine Research workshop w/ Constant, Aarhus U, Transmediale

I’m in Brussels with a group of fellow PhDs, academics, artists and technologists, at a workshop called Machine Research organised by Constant, Aarhus University’s Participatory IT centre, and Transmediale.

The workshop aims to engage research and artistic practice that takes into account the new materialist conditions implied by nonhuman techno-ecologies including new ontologies of learning and intelligence (such as algorithmic learning), socio-economic organisation (such as blockchain), population management and tracking (such as datafied borders), autonomous or semi-autonomous systems (such as bots or drones) and other post-anthropocentric reconsiderations of agency, materiality and autonomy.

I wanted to work on developing a subset of my ‘ethnography of ethics’ with a focus on error, trying to think about what error means and how it is managed in the context of driverless car ethics. It’s been great to have this time to think with other people working on related – and very unrelated – topics. It is the small things that count, really; like being able to turn around and ask someone: “what’s the difference between subjection, subjectivity, subjectification, subjectivization?”. The workshop was as much about researching the how of machines as it was about the how of research. I appreciated some encouraging thoughts and questions about what an ‘ethnography’ means as it relates to ethics and driverless cars; as well as a fantastic title for the whole thing (thanks Geoff!!).

Constant’s work involves a lot of curious, cool, interesting publishing and documentation projects, including those of an Oulipo variety. So one of the things they organised for us was etherpads. I use etherpads a lot at work, but for some people this was new. It was good seeing pads in “live editing” mode, rather than just for storage and sharing. We used the pads to annotate everyone’s presentations with comments, suggestions, links, and conversation. They had also made text filters that performed functions like deleting prepositions (the “stop words” filter) or recomposing text with Markov chains (the Markov filter):

“by organizing the words of a source text stream into a dictionary, gathering all possible words that follow each chunk into a list. Then the Markov generator begins recomposing sentences by randomly picking a starting chunk, and choosing a third word that follows this pair. The chain is then shifted one word to the right and another lookup takes place and so on until the document is complete.”

This is the basis of spam filters too.
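The stop-words filter mentioned above is the simplest of these; a sketch (the word list is abbreviated and mine, not Constant’s):

```python
STOP_WORDS = {'the', 'a', 'an', 'of', 'in', 'to', 'and', 'or', 'with', 'on', 'for', 'by'}

def strip_stop_words(text):
    """Delete the little connective words, leaving only content words behind."""
    return ' '.join(w for w in text.split() if w.lower() not in STOP_WORDS)
```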

In the course of the workshop people built new filters. Dave Young (who is doing really fascinating research on institutionality and network warfare in the US during the Cold War, through the study of its grey literature, like training manuals) made an “Acronymizer”, a filter that searches for much-used phrases in a text and creates acronyms from them.

We’ve also just finished creating our workshop “fanzine” using Sarah Garcin’s Publication Jockey, an Atari-punk, handmade publication device made with a Makey Makey and crocodile clips. The fanzine is a template and experiment for what we will produce at Transmediale. Some people have created entirely new works by applying their machine research practices to pieces of their own text. Based on the really great inputs I got, I rewrote my post as a series of seven scenarios for thinking about how ethics may be produced in various sociotechnical contexts. There’s that nice ‘so much to think about’ feeling! (And so much to do, of course.)

Sarah Garcin’s Publication Jockey, PJ

New: Privacy, Visibility, Anonymity: Dilemmas in Tech Use by Marginalised Communities

I started this Tactical Tech project two years ago and am thrilled to see it finally out. Research takes time! This is a synthesis report of two case studies we did in Kenya and South Africa on risks and barriers faced by marginalised communities in using technology (primarily in transparency and accountability work). You can download the report on the Open Docs IDS website here.

The Tesla Crash

It’s happened. A person has died in an accident involving a driverless car, raising difficult questions about what it means to regulate autonomous vehicles, and to negotiate control and responsibility with software that may one day be very good, but currently is not.

In tracing an ethnography of error in driverless cars, I’m particularly interested in how error happens, is recorded, understood, regulated and then used as feedback in further development and learning. So the news of any and every crash or accident becomes a valuable moment to document.

What we know from various news reports is that 40-year-old Joshua Brown was, supposedly, watching a Harry Potter DVD while test-driving his Tesla with the autopilot mode enabled, when the car slammed into the underside of a very large trailer truck. Apparently the sensors on the car could not distinguish between the bright sky and the white of the trailer truck. The top of the car was sheared off as it went at speed under the carriage of the trailer, and debris was scattered far.

Here’s an excerpt from a Reuters report of the crash from the perspective of a family whose property parts of the car landed in:

“Van Kavelaar said the car that came to rest in his yard next to a sycamore tree looked like a metal sardine can whose lid had been rolled back with a key. After the collision, he said, the car ran off the road, broke through a wire fence guarding a county pond and then through another fence onto Van Kavelaar’s land, threaded itself between two trees, hit and broke a wooden utility pole, crossed his driveway and stopped in his large front yard where his three daughters used to practice softball. They were at a game that day and now won’t go in the yard. His wife, Chrissy VanKavelaar, said they continue to find parts of the car in their yard eight weeks after the crash. “Every time it rains or we mow we find another piece of that car,” she said.”

People in the vicinity of a crash get drawn in without seeming to have a choice in the matter. Their perspective provides all kinds of interesting details and parallel narratives.

Joshua Brown was a Tesla enthusiast and had signed up to be a test driver. This meant he knew he was testing software; it wasn’t ready for the market yet. From Tesla’s perspective, what seems to count is how many millions of miles their cars logged before an accident occurred, which may not have been the best way to open a report on Brown’s death.

Key for engineers is perhaps the functioning of the sensors that could not distinguish between a bright sky and a bright, white trailer. Possibly, the code analysing the sensor data hadn’t been trained well enough to make the distinction. Interestingly, this is the sort of error a human being wouldn’t make; just as we know that humans can distinguish between a Labrador and a Dalmatian, while computer programs are only just learning how to. Clearly, miles to go ….

A key detail in this case is the nature of autopilot and what it means to engage this mode. Tesla clearly states that its autopilot mode means that a driver is still in control of, and responsible for, the vehicle:

“[Autopilot] is an assist feature that requires you to keep your hands on the steering wheel at all times”, and “you need to maintain control and responsibility for your vehicle while using it.” Additionally, every time autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.”

What Tesla is saying is that they’re not ready to hand over any responsibility or autonomy to machines. The human is still very much ‘in the loop’ with autopilot.

I suspect the law will need to wrangle over what autopilot means in the context of road transport and cars, as opposed to autopilot in aviation; this history has been traced by Madeleine Elish and Tim Hwang. They write that they “observe a counter intuitive focus on human responsibility even while human action is increasingly replaced by automation.”

There is a historical tendency to ‘praise the machine and punish the human’ for accidents and errors. Their recommendation is to reframe the question of accountability so as to widen the web of actors and their agencies beyond just vehicle and driver. They “propose that the debate around liability and autonomous systems be reframed more precisely to reflect the agentive role of designers and engineers and the new and unique kinds of human action attendant to autonomous systems. The advent of commercially available autonomous vehicles, like the driverless car, presents an opportunity to reconfigure regimes of liability that reflect realities of informational asymmetry between designers and consumers.”

This is so important, and yet I find it difficult to see, even as a speculative exercise, how you’d get Elon Musk to acknowledge his own role, or that of his engineers and developers, in accountability mechanisms. It will be interesting to watch how this plays out in the American legal system, because eventually there are going to have to be laws that acknowledge shared responsibility between humans and machines, just as robots need to be regulated differently from humans.

The Problem with Trolleys at re:publica

I gave my first talk about ethics and driverless cars for a non-specialist audience at re:publica 2016. In it I look at the problem with the Trolley Problem, the thought experiment being used to train machine learning algorithms in driverless cars. I focus on how logic-based notions of ethics have been transformed into an engineering problem, and suggest that this ethics-as-engineering approach is what will allow American law and insurance companies to assign blame and responsibility in the inevitable case of accidents. There is also the tension that machines are assumed to be correct, except when they aren’t, and that this sits in a difficult history of ‘praising machines’ and ‘punishing humans’ for accidents and errors. I end by talking about questions of accountability that look beyond algorithms and software themselves to their sites of production.

Here’s the full talk.

Works cited in this talk:

1. Judith Jarvis Thomson’s 1985 paper in the Yale Law Journal, The Trolley Problem
2. Patrick Lin’s work on ethics and driverless cars. Also relevant is the work of his doctoral students at UPenn looking at applications of Blaise Pascal’s work to the “Lin Problem”
3. Madeleine Elish and Tim Hwang’s paper ‘Praise the machine! Punish the human!’ as part of the Intelligence & Autonomy group at Data & Society
4. Madeleine Elish’s paper on ‘moral crumple zones’; there’s a good talk and discussion with her on the website of the proceedings of the WeRobot 2016 event at Miami Law School.
5. Langdon Winner’s ‘Do Artifacts Have Politics?’
6. Bruno Latour’s Actor Network Theory.

Experience E

How science represents the real world can be cute to the point of frustration. In 7th grade mathematics you have problems like:

“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”

Well, of course there was no such thing then as men taking paternity leave. But you can’t help thinking about the universe of such a problem. What was the work? Were all the men the same, did they do the work in the same way, wasn’t one of them better than the rest, and therefore the leader of the pack who got to decide what they would do on their day off?

Here is the definition of machine learning according to one of the pioneers of machine learning, Tom M. Mitchell[1]:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”
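To make the letters concrete, here is a toy instantiation of the definition; the example is mine, not Mitchell’s. The task T is guessing whether a number is even, experience E is a growing list of labelled examples, and performance P is accuracy on numbers the program has never seen:

```python
import random

def learn(experience):
    """The 'learner': memorise the majority label seen for each last digit."""
    table = {}
    for number, label in experience:
        table.setdefault(number % 10, []).append(label)
    return {digit: max(set(ls), key=ls.count) for digit, ls in table.items()}

def performance(model, tests):
    """P: the fraction of unseen numbers classified correctly."""
    return sum(model.get(n % 10) == (n % 2 == 0) for n in tests) / len(tests)

E = [(n, n % 2 == 0) for n in random.sample(range(1000), 60)]   # experience E
tests = range(1000, 1100)                                        # unseen instances of task T
for size in (5, 20, 60):
    print(size, performance(learn(E[:size]), tests))             # P improves as E grows
```

Run it and P climbs as E grows, which is all the definition asks us to care about: everything one might want to know about E has already been flattened into a list of tuples.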

This can be difficult for a social scientist to parse, because you’re curious about the experience E and the experiencer of experience E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who set the standard for performance P? For a scientist, experience E itself is not that important; rather, how E is achieved, sustained and improved on is the important part. How science develops these problem-stories becomes an indicator of its narrativising of the world; a world that needs to be fixed.

This definition is the beginning of the framing of ethics in an autonomous vehicle. Ethics becomes an engineering problem to be solved by logical probabilities executed and analysed by machine learning algorithms. (TBC)

[1] http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/?utm_source=medium&utm_medium=post&utm_content=chapterlink&utm_campaign=republish

Past present: Revisiting the past through documentary

When you’re curating a program for yourself at an event or conference, you’re often doing so consciously and conscientiously: there are things you need to see or attend for work, or for something new you need to wrap your head around. Then there are those times when it seems like you have no agenda except entertainment and pleasure, which doesn’t mean, however, that your curated program is serendipitous or magical. This is what this week’s Berlinale is for me. I found myself curating one part of my program around some expected resonances: three films involving female protagonists reconstructing or re-discovering the past, and in doing so visiting the unstable ground between, and in the creation of, fiction and non-fiction:

1. Kate plays Christine. Robert Greene. 2016.
In 1974 in Sarasota, Florida, a 29-year-old newscaster, Christine Chubbuck, shot herself, fatally, on live TV. In 2015, an actress, Kate Lyn Sheil, prepares to recreate that moment, and the film follows her journey.

2. A Magical Substance Flows Into Me. Jumana Manna. 2016
In the 1930s, Robert Lachmann, a German, had a radio show featuring “Oriental”, i.e. Palestinian, music. In 2014, Jumana Manna, a Palestinian artist, travels around Israel and Palestine playing recordings from the old shows and recording contemporary versions. What do these songs sound like now, performed by Moroccan, Kurdish, or Yemenite Jews, by Samaritans, members of the urban and rural Palestinian communities, Bedouins and Coptic Christians?

3. The Watermelon Woman. Cheryl Dunye. 1996
Cheryl is a young black woman working at a video store. She becomes curious about black women playing stereotypical ‘mammies’ in films from the 1930s and 1940s. She sets out to discover one known only as the Watermelon Woman, a black lesbian actress who had an affair with a white woman director….

I’m excited to see them all and will be writing about them here.