Category Archives: ethics

Machine Research @ Transmediale

The results of the Machine Research workshop from back in October were launched at Transmediale: the zine, and a studio talk.

During the workshop, we explored the use of various writing machines and the ways in which research has become machine-like. The workshop questioned how research is bound to the reputation economy and the profiteering of publishing companies, which charge large amounts of money to release texts under restrictive conditions. Using Free, Libre, and Open Source collaboration tools, Machine Research participants experimented with collective notetaking, transforming their contributions through machine authoring scripts and a publishing tool developed by Sarah Garcin. (The image accompanying this post is a shot of the PJ, or Publication Jockey, with some text it laid out on a screen in the back.) The print publication, or zine, launched at Transmediale, is one result of this process. You can read the zine online.

The studio talk brought together one half of our research group, who talked about ‘infrastructures’. Listen to it here (I’m speaking at 44:09).

33c3 Talk Notes + Video: Entanglements (Ethics in the Data Society)

ENTANGLEMENTS. MACHINE INTELLIGENCE, HUMAN ETHICS, AND DRIVERLESS CARS
(original title: ‘Ethics in the data society’)
Notes for a talk given at 33c3 in Hamburg, Dec 29, 2016
(Video here till it moves to YouTube?)

This talk is not about what the answer should be to the question “how do we program driverless cars to make ethical decisions?” It is about how and why we’re asking that question in this way. What do we really mean when we say ‘ethics’? This talk assembles some recent examples of how a narrative around ethics in the context of driverless cars is being developed, and asks whether we are really talking about ethics, or about managing the risks we perceive to be associated with artificial intelligence.

What are some of the ways in which the narrative around ethics is being constructed?

Ethics as the outcome of moral decision-making.
Ethics as accountability for errors, breakdowns, and accidents.
Ethics as a way to regulate the risks we perceive to be associated with AI; ethics as the way to construct a rationale for the financialisation of this risk by various regulatory industries.
Ethics in technology design.
And, perhaps, the suggestion that ethics is constituted by multiple factors and actors and is produced locally and contextually.

Driverless cars are being developed with not-very-robust software, and in the open (like Uber’s testing and development in Pittsburgh), and there is little clarity about how safe these technologies are. Moreover, what we know about ethical considerations around AI tends to come from SF – speculative (or science) fiction. We know about AI going rogue, from HAL in 2001: A Space Odyssey to Ava in Ex Machina. With Ava we see an AI pass the Turing Test by behaving in a ruthless, cunning and unethical way in order to actually survive – one of the most human things it does.

Thus alongside expectations of precision and rationality through computing, we also have fear, fantasy and anxiety with respect to what we think intelligent machines can and will do.

1. Ethics as an outcome of software

“Technology will come to the rescue of its naughty but very clever children” (Donna Haraway) is one way to see this idea that machine learning will enable us to eventually get machines to learn what the appropriate response is to a situation in which a driverless car may be involved in an accident. There is a history to AI ethics in which ethics is expected to be the outcome of a computer program. In the past this had to be hard-coded in; now, ethics as moral decision-making could effectively be ‘learned’ by showing an algorithm different ways to act in cases of potential accidents.

There is a desire to find models for programming ethics. So the Trolley Problem became popular with Google’s self-driving car project, and there is a new application of it in a project from MIT called the Moral Machine Project. The Trolley Problem is a 1960s thought experiment in which a decision must be made between two difficult choices in the case of a potential accident: have five people killed, or one. The Trolley Problem pits consequentialist ethics (outcomes are what matter, therefore saving five people is more important) against Kantian, deontological ethics (what matters more is how you arrive at the reasons for saving those five people; the rules you use to arrive at a decision). Moral Machine is a project in which you can play the Trolley Problem, and it is also part of a research study being done at MIT.
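
To make this ‘ethics as software output’ framing concrete, here is a deliberately reductive sketch of how the two positions collapse into functions once they are programmed in. This is not Moral Machine’s actual code; every function name and value is a hypothetical illustration.

```python
# A deliberately reductive sketch (not Moral Machine's actual code) of how
# the two framings collapse into functions once 'programmed in'. All names
# and values are hypothetical illustrations.

def consequentialist_choice(casualties_a, casualties_b):
    # Outcomes are all that matter: minimise the number of people killed.
    return "A" if casualties_a < casualties_b else "B"

def deontological_choice(a_requires_intervention, b_requires_intervention):
    # The rule matters more than the outcome, e.g. 'do not actively divert
    # harm onto someone': prefer the option that avoids intervening.
    if not a_requires_intervention:
        return "A"
    if not b_requires_intervention:
        return "B"
    return "A"  # both require acting; the rule alone gives no guidance

# The two framings can disagree on the classic trolley case:
print(consequentialist_choice(1, 5))       # "A": kill one to save five
print(deontological_choice(True, False))   # "B": refuse to divert the trolley
```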

However, one of the creators of the Moral Machine project has said something that suggests even he thinks it isn’t about easy choices:

“At this point, we are looking into various forms of shared control, which means that any accident is going to have a complicated story about the exact sequence of decisions and interventions from the vehicle and from the human driver.” – Jean Francois Bonnefon, Dec 27, 2016, in Gizmodo (ref below)

(Also useful to ask what Moral Machine will do with all the data it collects from its online scenarios)

So perhaps it isn’t as simple as making a choice between one or the other. Others have suggested approaches such as Pascalian programming (Bhargava 2016), in which the outcomes of a potential accident are ranked and rated, and the software makes a decision more ‘randomly’ based on the context (for example, what kind of car, how many people are in each, climatic conditions, etc.). In Sepielli’s Pascalian approach, applied by Bhargava, you can even factor in damage to the driverless car and driver, which the Trolley Problem does not do, and which is generally anathema to car manufacturers.
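
A minimal sketch of how such a Pascalian choice might look in code, on my reading rather than Bhargava’s published model: rank possible manoeuvres by expected harm under uncertainty, then choose probabilistically rather than deterministically, so that no option – including damage to the car and driver – is excluded outright. All names, severities and weights below are invented for illustration.

```python
# A hedged sketch of a 'Pascalian' decision procedure: weight options by
# expected harm, then pick probabilistically. All numbers are invented.
import random

def expected_harm(manoeuvre):
    # manoeuvre["harms"] maps a harm severity to its probability given the
    # context (car type, occupants, weather, etc.) -- hypothetical values.
    return sum(prob * severity for severity, prob in manoeuvre["harms"].items())

def pascalian_choice(manoeuvres):
    # Lower expected harm => higher chance of being picked, but no option
    # (including harm to the driver) is ruled out entirely.
    weights = [1.0 / (1e-6 + expected_harm(m)) for m in manoeuvres]
    return random.choices(manoeuvres, weights=weights, k=1)[0]

options = [
    {"name": "swerve", "harms": {10: 0.1, 2: 0.5}},  # severity: probability
    {"name": "brake",  "harms": {10: 0.3, 2: 0.2}},
]
print(pascalian_choice(options)["name"])  # usually, but not always, "swerve"
```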

There is also the Ethics Bot suggested by Etzioni and Etzioni (2016), in which a bot manages algorithms to create appropriate and personalised ethical standards. They base this on users’ use of the home thermostat, Nest: being conscious of energy use and managing heating through Nest, a person is taken to be more ethical because they care about the environment. However, this rests on many assumptions, chiefly that regulating heating has anything to do with ethical environmental behaviour. The Ethics Bot monitors energy use and finds patterns that are assumed to be the individual’s moral standard; the algorithms in Nest regulating heat and energy use are then regulated by the Ethics Bot.
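
A speculative sketch of that mechanism as described: observe usage, infer a ‘standard’, and use it to constrain the device’s own algorithms. The inference step is exactly where the questionable assumption (usage pattern equals moral standard) gets encoded. All names and numbers here are hypothetical, not the Etzionis’ design.

```python
# A speculative sketch of an 'Ethics Bot': derive a personal 'standard' from
# thermostat usage history and cap the device's plans with it. Hypothetical.
from statistics import mean

def infer_standard(daily_kwh_history):
    # Assumes frugal energy use expresses an environmental ethic --
    # the leap the proposal rests on.
    return mean(daily_kwh_history)

def regulate(proposed_kwh, standard, tolerance=0.1):
    # The bot overrides the thermostat's plan if it exceeds the inferred standard.
    cap = standard * (1 + tolerance)
    return min(proposed_kwh, cap)

history = [8.2, 7.9, 8.5, 8.0]                  # hypothetical daily usage
print(regulate(12.0, infer_standard(history)))  # capped near ~9.0 kWh
```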

2. What do we think of machines that learn?
Moral Machines, Ethics Bots, and the Pascalian approach to programming for uncertainty all expect that ethics will be an outcome of software programming, of Big Data thinking: that if you show a database enough options and situations of how to act, it will learn the appropriate ethical decision in the case of an accident. But is this “ethics”, really? What’s the problem with the idea of programming ethics, or expecting the machine to give us ethical responses? If you want to rely on machine learning and vast data sets to make things more personalised, then Turing’s famous question needs an update: ‘what do we think about machines that think’ becomes ‘what do we think about machines that learn’. What we know is that machines are, like children and primates, ‘almost-minds’, as Donna Haraway put it. They learn quite literally. We have already seen that machine learning is pretty basic, and reproduces biases existing in data. So why are we expecting that enough training data gathered from (what?) sources is going to give us ‘ethical’ results produced by machines?

3. It’s almost as if we’re building and developing systems to be machine-readable: for computers, models, databases and simulations to correspond with each other. One of the things I’m interested in is how we reshape systems for the ascendancy of intelligent machines. Perhaps there are other questions we should be asking about ethics, like: what does it mean to have AI in autonomous vehicles?

For example, what is the future of cityspace when it is built for driverless cars? (Read more on the Cyborgology blog.)

4. Ethics-as-accountability; ethics and the financialisation of risk
There is another way in which we talk about ethics: in terms of accountability. Again, this starts from framing the car in terms of its involvement in accidents: who is accountable and responsible, and how should we think about accountability? Accountability before the law, as well as accountability in terms of insurance payouts and product liabilities.

There is a history, from the 1990s onwards, of car crashes being simulated and modelled as a way to save money on crash testing and to anticipate how accidents take place. Is it possible that we will see the risk associated with driverless car crashes being modelled, projected and assigned financial value?

In the 1990s-2000s there was increased use of models and mathematical simulations of crashes by car manufacturers. Nigel Gale quotes an American car maker talking about this:

“Road to lab to math is basically the idea that you want to be as advanced on the evolutionary scale of engineering as possible. Math is the next logical step in the process over testing on the road and in the lab. Math is much more cost effective because you don’t have to build pre-production vehicles and then waste them. We’ve got to get out in front of the technology so it doesn’t leave us behind. We have to live and breathe math. When we do that, we can pass the savings on to the consumer.” (Gale 2005)

There will be increased involvement of different actors in regulating the behaviour of driverless cars: industry bodies, new regulatory environments, urban and civic infrastructure, public education, law enforcement, and ways of dealing with the massive social change and disruption that these cars will bring. All of these will have some role to play in the legalisation and regulation of ethical behaviour. It may not be ‘ethics’, but it may be claimed as such.

5. Ethics as accountability and design
Another aspect of ethics-as-accountability has to do with design: the idea that accountability lies with the designers and developers of technology.
There is value perceived in showing who the ‘man behind the curtain’ is, assigning some accountability to them, and showing how design carries its designers’ values within it.

There has long been consideration of AI and ethics, and of fears of AI going rogue. There are also the ethics of the work of scientists and technical workers.
This is where cinema and literature and fantasy come in as well.

There are lots of recent examples of how technology designed by mostly white, male communities does not reflect the realities of its users’ worlds. Assistants like Google Now can assist you if you say ‘I’m having a heart attack’, or if you mention suicide; but if you say ‘I’m being raped’, the response is, “I’m sorry, I don’t know what rape is”. Or the story of the Apple Watch, one of whose cool features may not work for people of colour because of how it was tested on white people.

So there are lots of examples of technology merely reflecting the biases of its designers. Is this really tenable with driverless cars when you have so many different actors and millions of lines of code? It’s a complex feat to identify every actor and factor in a car.

One of the most interesting recent documents showing a serious attempt to think about ethics in artificially intelligent/autonomous systems is from the IEEE, and is called Ethically Aligned Design.

6. Ethics as produced contextually
So is there a way to think about ethics differently, as something that is produced more contextually and locally? It is something of an ambition to see if this can be done.

“technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making. Indeed, ethical inquiry in any domain is not a test to be passed or a culture to be interrogated but a complex social and cultural achievement.” (Ananny 2016)

7. I recently did a workshop with designers and programmers working in a tech company sort of related to autonomous driving. I am interested in playing with different methods and approaches right now, and I’m really interested in Scenario Planning and Design Fiction as ways to do this.

Would it be possible for designers and programmers to consider ethics contextually, locally, and imaginatively in relation to actual things that could happen a few years from now? Here are the cases I worked through with a group of designers and programmers recently:

– Build a map layer for use by parents and children to share transportation
– How could a car ride sharing fleet service build an option for women users to feel safe?
– What are the issues in a way-finding or route-finding service for autonomous cars that avoids ‘high crime neighbourhoods’?

What was interesting about the responses was that the designers and programmers I talked to are highly aware of the issues and concerns, but feel they are part of a corporate entity that eventually has to make money by selling things in a certain way. Paula Bialski’s work also shows that programmers in tech companies like this one are looking for spaces for micro-resistance and micropolitics.

References
Ananny, M. (2016) ‘Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness’, Science, Technology, & Human Values 41(1): 93–117.
Bhargava, V. (2016, forthcoming) ‘What if Blaise Pascal Designed Autonomous Vehicles?’, in Roboethics 2.0.
Cramer, F. (2016) ‘Crapularity Hermeneutics’. http://cramer.pleintekst.nl/essays/crapularity_hermeneutics/#fnref37
Etzioni, A. and Etzioni, O. (2016) ‘AI Assisted Ethics’, Ethics and Information Technology 18: 149–156.
Gale, N. (2005) ‘Road-to-lab-to-math: A New Path to Improved Product’, Automotive Engineering International (May): 78–79.
Weiner, S. (2016) http://gizmodo.com/if-a-self-driving-car-kills-a-pedestrian-who-is-at-fau-1790049637

FILTERED TEXT W/ CONSTANT

At the Machine Research workshop, we played with text filters developed by Constant as a way to explore machinic actions on various texts. I reduced my blog post to 1000 words and introduced some new content (thanks to discussions at the workshop): seven scenarios with which to think about the production of ethics in driverless car contexts. This post starts with the original text followed by the filtered texts. Some of the filtered texts become such beautiful gibberish.

ORIGINAL
This work argues that ethics in driverless cars is produced by a complex assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering that constitute socio-technical frameworks for accountability. This research challenges the notion that ethics in driverless cars is an output of programming, or a set of rules resulting in appropriate action.

As Mike Ananny says, “technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making. Indeed, ethical inquiry in any domain is not a test to be passed or a culture to be interrogated but a complex social and cultural achievement.” (emphasis in original 2016 p 96). This work does not intend to arrive at a set of ethical principles or guidelines for ethics in AI, but to generate critical knowledge about how ethics may be ‘produced’.

Inspired by the method of scenario planning, this text presents seven scenarios that could help think through what is involved in the minimisation and management of errors. The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, as a way to allow the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war” “located somewhere between a story outline and ever more sophisticated role-playing war games”, “a staple of the new futurism” (2014). Since then scenario planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. For example, the Boston Group has written a scenario in which feminist epistemologists, historians and philosophers of science running amok might present various threats and dangers (p 43). More recently, MIT’s Moral Machine project adopts the Trolley Problem as a template for gathering users’ responses to scenarios that a driverless car is thought to have to be programmed to respond to in potential future accidents.

In working through these scenarios, the reader is asked to consider how ethics may be constituted and produced, how this production can be studied, and how the emphasis on ethics may result in changes to how space and human relations are constituted.

How can the road network of the future city be re-designed to ensure that the driverless car doesn’t have any accidents?

Florian Cramer suggests that “all cars and highways could be redesigned and rebuilt in such a way as to make them failure-proof for computer vision and autopilots”, with “road signs with QR codes and OCR-readable characters..straight[ening] motorways to make them perfectly linear.” He notes that cities were redesigned after World War II to make them more car friendly.

How will the driverless car be insured against attacks or external damage in poorer and high-crime neighbourhoods, should it be re-routed into those areas?

Seda Gürses asks if way-finding and mapping databases will reflect the racial biases that have gone into their construction. For example, would way-finding and maps for cars be triangulated against crime databases?

Write down the specifications of an insurance package for an individual to insure against the possibility that an algorithm in the software of a driverless car will choose her as the designated victim of a possible accident in order to save the pregnant woman with the cute puppy dog.

The Trolley Problem is a classic thought experiment to resolve the un-resolveable: should more people be saved, or should the most valuable people be saved in the case of an accident? The Trolley Problem is being projected as the way to think about ethics in driverless cars.

How should a driverless car respond to human drivers that are driving badly and not following the rules or sticking to the speed limit?
Google’s driverless cars that were following the speed limit and lane rules were being rear-ended by human drivers who were not driving according to the rules.

Work through how Emi, 12, can go for a movie with her friends in her mother’s new semi-autonomous Tesla.

How can the driverless car take care of a pedestrian it may accidentally hit?
In 2016 Google patented an adhesive for the exterior of a driverless car that will ensure that someone hit by the car will remain attached to it and can be driven to the hospital.

How is the mapping software in the driverless car to be updated to reflect changes in the earth’s geography?

Australia is located on tectonic plates that are moving about seven centimetres north every year; the whole country has shifted by roughly five feet since its map coordinates were last fixed in the 1990s. This means that maps used by driverless cars, or driverless farm tractors, now have inexact data to work with.

MARKOV:
The Markov generator begins by organizing the words of a source text stream into a dictionary, gathering into a list all the possible words that follow each chunk (here, a pair of consecutive words). Then the Markov generator recomposes sentences by randomly picking a starting chunk and choosing a third word that follows this pair. The chain is then shifted one word to the right, another lookup takes place, and so on until the document is complete. It allows for humanly readable sentences, but does not exclude the kinds of errors we recognize when reading spam.
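
As a sketch, the procedure described above might be implemented like this; Constant’s actual script may differ in details such as chunk size and punctuation handling, and the filename is a placeholder:

```python
# A minimal re-implementation of the Markov filter as described above,
# using pair-of-words chunks.
import random
from collections import defaultdict

def build_dictionary(words):
    # Map each two-word chunk to the list of words that follow it.
    followers = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        followers[(a, b)].append(c)
    return followers

def generate(words, length=50):
    followers = build_dictionary(words)
    chunk = random.choice(list(followers))   # random starting chunk
    out = list(chunk)
    for _ in range(length):
        options = followers.get(chunk)
        if not options:                      # dead end: stop early
            break
        nxt = random.choice(options)         # a third word following the pair
        out.append(nxt)
        chunk = (chunk[1], nxt)              # shift the chain one word right
    return " ".join(out)

source = open("original.txt").read().split()  # hypothetical source file
print(generate(source))
```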

and autopilots with her as to respond to the cute puppy dog? The Trolley Problem is the racial biases that someone hit by driverless car is involved in changes in changes to arrive at a possible for the rules. Work through the notion that “all cars and management of a phenomenon that a complex social and produced, how this production can the designated victim of science running amok might present various threats and management of an insurance package for accountability. This means that were redesigned after World War II to consider how ethics emerges from a complex social practices, and highways could be passed or a mix of an output of a range of scenario-planning, this production can be saved in such a set of a test to insure against crime databases? Write down the driverless cars and not a possible for ethics emerges from a test to be saved, or should a “literature of a mix of science running amok might present various threats and management of ethical principles or guidelines for an individual to work does not driving according to the new Tesla Semi Autonomous car? How will move by a scenario in appropriate action. As Mike Ananny says, “technology ethics in the cute puppy dog? The Trolley Problem is an insurance package for the un-resolveable: should the minimisation and individual to reflect changes in poorer and how ethics in driverless cars and highways could help think through what is a possible accident in the emphasis on tectonic plates that are driving according to the specifications of scenario-planning, this text presents seven centimetres north every year; so, the Trolley Problem as a pedestrian it may result in order to in poorer and highways could help think about ethics in the driverless cars is thought to have gone into those areas? Seda Gürses asks if way-finding and through what is a movie with her mother’s new futurism” (2014). Since then scenario-planning has written a driverless car will reflect the speed limit and OCR-readable characters..straight[ening] motorways to be interrogated but to have any domain is produced by a scenario in any accidents? Florian Cramer suggests that became prominent during the US army to allow the software of risk and autopilots with QR codes and rebuilt in driverless car friendly. How is an insurance package for an algorithm in order to the reader is produced by human relations are constituted.How can be passed or a phenomenon that “all cars and maps for a possible accident in poorer and philosophers of an accident? The ‘scenario’ is an individual decision making. Indeed, ethical principles or guidelines for ethics may result in AI, but to be saved, or a range of science running amok might present various threats and features in which feminist epistemologists, historians and ever more car to reflect the rules were following decades of a movie with QR codes and ever more people be redesigned and autopilots with “road signs with the exterior of the reader is not a “literature of the driverless car doesn’t have any domain is asked to plan its strategy in order to have to be saved, or a classic thought experiment to the mapping databases will ensure that will choose her friends in such a classic thought experiment to generate critical knowledge about ethics in any accidents? Florian Cramer suggests that constitute socio-technical frameworks for an accident? The ‘scenario’ is a driverless cars. How is located on ethics may accidentally hit? In 2016 Google patented an insurance package for accountability. 
This means that ethics in original 2016 Google patented an accident? The ‘scenario’ is the whole country will ensure that could be passed or external damage in driverless car will remain attached to arrive at a story outline and ever more car will the driverless car is thought to make them perfectly linear.” He notes that maps for an output of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and management of errors. The Trolley Problem is a template for ethics may accidentally hit? In 2016 p 96). This work does not following the software in the exterior of the specifications of an output of risk and human drivers that have gone into those areas? Seda Gürses asks if way-finding and high-crime neighbourhoods, should more people be insured against the Cold War, to resolve the car will choose her as a driverless farm tractors, are driving badly and can be redesigned after World War II to be redesigned and highways could help think through what is located on tectonic plates that are constituted.How can be saved, or driverless cars that ethics in appropriate action. As Mike Ananny says, “technology ethics in any accidents? Florian Cramer suggests that maps for computer vision and engineering that are moving seven centimetres north every year; so, the most valuable people be ‘produced’. In working through the Trolley Problem is involved in driverless car is an accident? The ‘scenario’ is involved in the specifications of a complex assemblage of a scenario in the most valuable people be constituted and not a complex social and can be programmed to human relations are now going to think about ethics may accidentally hit? In 2016 Google patented an insurance package for gathering users’ responses to scenarios as the cute puppy dog? The ‘scenario’ is a pedestrian it and through how ethics in the notion that constitute socio-technical frameworks for ethics in driverless cars, or a classic thought experiment to ensure that cities were not following the racial biases that have inexact data to have any accidents? Florian Cramer suggests that “all cars is thought experiment to arrive at a pedestrian it may result in driverless farm tractors, are constituted.How can go for ethics in driverless cars. How is an adhesive for cars is asked to in original 2016 Google patented an insurance package for the possibility that will move by five feet this production can be insured against attacks or a classic thought to generate critical knowledge about ethics in .

ACRONYMIZER:
Ever feel that your text is too verbose? Struggling to fit your lovingly crafted magnum opus into some arbitrary word-count constraint with a deadline fast approaching? Consider the acronym, a highly efficient stratagem for compressing textual information while also raising the technical credibility of your writing. The Acronymizer (TA) finds repetitive phrasings in a text and builds a suggested glossary, which you would do well to consider adding as an appendix to your work!
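
Here is a guess, in code, at how such a filter might work; Dave Young’s actual script surely differs. The output format mimics the glossary below, and the filename is a placeholder.

```python
# An Acronymizer-style sketch: count three-word phrases, keep the frequent
# ones, and propose acronyms built from their initials.
import re
from collections import Counter

def acronymize(text, min_count=3):
    words = re.findall(r"[\w'-]+", text.lower())
    trigrams = Counter(zip(words, words[1:], words[2:]))
    glossary = {}
    for phrase, count in trigrams.items():
        if count >= min_count:
            acronym = "".join(w[0] for w in phrase).upper()
            glossary[acronym] = " ".join(phrase).upper()
    return dict(sorted(glossary.items()))

text = open("original.txt").read()        # hypothetical source file
for acronym, phrase in acronymize(text).items():
    print(f"{acronym} : {phrase}")        # e.g. "ADC : A DRIVERLESS CAR"
```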

ADC : A DRIVERLESS CAR
ASO : A SET OF
DCI : DRIVERLESS CARS IS
EID : ETHICS IN DRIVERLESS
EMB : ETHICS MAY BE
IDC : IN DRIVERLESS CARS
OAD : OF A DRIVERLESS
PBS : PEOPLE BE SAVED
TDC : THE DRIVERLESS CAR
TEI : THAT ETHICS IN
TMT : TO MAKE THEM
TPI : TROLLEY PROBLEM IS
TTP : THE TROLLEY PROBLEM

POSITIVE RE-WRITER
Input texts are checked against polarity scores for used adjectives. When the score is higher than 0.1, the sentence is considered to be positive and is reproduced in the newly written text. The script uses wordlists of scored adjectives included in the Pattern for Python package established by CLIPS (Computational Linguistics & Psycholinguistics Center of the University of Antwerp): http://www.clips.ua.ac.be/pattern.
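
The whole rewriter family can be sketched in a few lines around Pattern’s sentiment() call, which returns a (polarity, subjectivity) pair per string. The three variants in this post differ only in the test applied; the sentence splitting here is deliberately crude, and the filename is a placeholder.

```python
# A sketch of the rewriter family; the workshop scripts may differ.
from pattern.en import sentiment  # the Pattern package named above

def rewrite(sentences, keep):
    # Keep only the sentences whose (polarity, subjectivity) pass the test.
    return " ".join(s for s in sentences if keep(*sentiment(s)))

sentences = open("original.txt").read().split(". ")  # crude sentence split
positive = rewrite(sentences, keep=lambda pol, subj: pol > 0.1)
negative = rewrite(sentences, keep=lambda pol, subj: pol < 0.1)
neutral  = rewrite(sentences, keep=lambda pol, subj: subj == 0)
print(positive)
```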

Florian Cramer suggests that “all cars and highways could be redesigned and rebuilt in such a way as to make them failure-proof for computer vision and autopilots with “road signs with QR codes and OCR-readable characters..straight[ening] motorways to make them perfectly linear.” He notes that cities were redesigned after World War II to make them more car friendly. Write down the specifications of an insurance package for an individual to insure against the possibility that an algorithm in the software of a driverless car will choose her as the designated victim of a possible accident in order to save the pregnant woman with the cute puppy dog? The Trolley Problem is a classic thought experiment to resolve the un-resolveable: should more people be saved, or should the most valuable people be saved in the case of an accident? Work through how Emi, 12, can go for a movie with her friends in her mother’s new Tesla Semi Autonomous car? Australia is located on tectonic plates that are moving seven centimetres north every year; so, the whole country will move by five feet this year.

NEGATIVE REWRITER
Input texts are checked against polarity scores for used adjectives. When the score is lower than 0.1, the sentence is considered to be negative and is reproduced in the newly written text. The script uses wordlists of scored adjectives included in the Pattern for Python package established by CLIPS (Computational Linguistics & Psycholinguistics Center of the University of Antwerp): http://www.clips.ua.ac.be/pattern.

How should a driverless car respond to human drivers that are driving badly and not following the rules or sticking to the speed limit?

SENTIMENT_REDUCTION.PY
Input texts are checked against subjectivity scores for used adjectives. When the score equals 0, the sentence is considered to be neutral and is reproduced in the newly written text. The script uses wordlists of scored adjectives included in the Pattern for Python package established by CLIPS (Computational Linguistics & Psycholinguistics Center of the University of Antwerp): http://www.clips.ua.ac.be/pattern.

Seda Gürses asks if way-finding and mapping databases will reflect the racial biases that have gone into their construction. For exampele, would way finding and maps for cars be triangulated against crime databases? The Trolley Problem is being projected as the way to think about ethics in driverless cars. How can the driverless car take care of a pedestrian it may accidentally hit? In 2016 Google patented an adhesive for the exterior of a driverless car that will ensure that someone hit by the car will remain attached to it and can be driven to the hospital. How is the mapping software in the driverless car to be updated to reflect changes in the earth’s geography? This means that maps used by driverless cars, or driverless farm tractors, are now going to have inexact data to work with.

DISAPPEARANCE
This script goes through the input text word by word. Each word is kept the first time it appears; every subsequent occurrence of it is removed, until the desired reduction is reached.
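
A minimal sketch of the described behaviour, leaving out the stop-at-target-reduction step:

```python
# Disappearance-style filter: keep each word's first occurrence only.

def disappear(text):
    seen = set()
    kept = []
    for word in text.split():
        key = word.lower().strip(".,;:!?\"'")  # normalise for matching
        if key not in seen:
            seen.add(key)
            kept.append(word)
    return " ".join(kept)

print(disappear("the cat sat on the mat because the mat was warm"))
# -> "the cat sat on mat because was warm"
```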

This work argues that ethics in driverless cars is produced by a complex assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering constitute socio-technical frameworks for accountability. research challenges the notion an output programming, or set rules resulting appropriate action. As Mike Ananny says, “technology emerges from mix institutionalized professional cultures, technological capabilities, practices, individual decision making. Indeed, ethical inquiry any domain not test to be passed culture interrogated but achievement.” (emphasis original 2016 p 96). does intend arrive at principles guidelines AI, generate critical knowledge about how may ‘produced’.

Inspired method scenario-planning, this text presents seven scenarios could help think through what involved minimisation management errors. The ‘scenario’ phenomenon became prominent during Korean War, following decades Cold allow US army plan its strategy event nuclear disaster. Paul Galison describes as “literature future war” “located somewhere between story outline ever more sophisticated role-playing war games”, “a staple new futurism” (2014). Since then scenario-planning has been adopted range organisations, features modelling risk identify For example, Boston Group written scenario which feminist epistemologists, historians philosophers science running amok might present various threats dangers (p 43). More recently. MIT’s Moral Machine project adopts Trolley Problem template gathering users’ responses car thought have programmed respond potential accidents.
In working these scenarios, reader asked consider it possible constituted produced, production can studied, emphasis on result changes space human relations are constituted.How road network city re-designed ensure doesn’t accidents? Florian Cramer suggests “all highways redesigned rebuilt such way make them failure-proof computer vision autopilots with “road signs QR codes OCR-readable characters..straight[ening] motorways perfectly linear.” He notes cities were after World War II friendly. How will insured against attacks external damage poorer high-crime neighbourhoods, should re-routed into those areas? Seda Gürses asks if way-finding mapping databases reflect racial biases gone their construction. exampele, would finding maps triangulated crime databases? Write down specifications insurance package insure possibility algorithm software choose her designated victim accident order save pregnant woman cute puppy dog? classic experiment resolve un-resolveable: people saved, most valuable saved case accident? being projected cars. drivers driving badly sticking speed limit? Google’s limit lane rear-ended who according rules. Work Emi, 12, go movie friends mother’s Tesla Semi Autonomous car? take care pedestrian accidentally hit? Google patented adhesive exterior someone hit remain attached driven hospital. updated earth’s geography? Australia located tectonic plates moving centimetres north every year; so, whole country move five feet year. means used cars, farm tractors, now going inexact data with.

Machine Research workshop w/ Constant, Aarhus U, Transmediale

I’m in Brussels with a group of fellow PhDs, academics, artists and technologists, at a workshop called Machine Research organised by Constant, Aarhus University’s Participatory IT centre, and Transmediale.

The workshop aims to engage research and artistic practice that takes into account the new materialist conditions implied by nonhuman techno-ecologies including new ontologies of learning and intelligence (such as algorithmic learning), socio-economic organisation (such as blockchain), population management and tracking (such as datafied borders), autonomous or semi-autonomous systems (such as bots or drones) and other post-anthropocentric reconsiderations of agency, materiality and autonomy.

I wanted to work on developing a subset of my ‘ethnography of ethics’ with a focus on error: trying to think about what error means and how it is managed in the context of driverless car ethics. It’s been great to have this time to think with other people working on related – and very unrelated – topics. It is the small things that count, really; like being able to turn around and ask someone: “what’s the difference between subjection, subjectivity, subjectification, subjectivization?”. The workshop was as much about researching the how of machines as it was about the how of research. I appreciated some encouraging thoughts and questions about what an ‘ethnography’ means as it relates to ethics and driverless cars; as well as a fantastic title for the whole thing (thanks Geoff!!).

Constant’s work involves a lot of curious, cool, interesting publishing and documentation projects, including some of an Oulipo variety. So one of the things they organised for us was etherpads. I use etherpads a lot at work, but for some people this was new. It was good to see pads in “live editing” mode, rather than used just for storage and sharing. We used the pads to annotate everyone’s presentations with comments, suggestions, links, and conversation. Constant had also made text filters that performed functions like deleting stop words such as prepositions (the “stop words” filter), or recomposing text using Markov chains (the Markov filter):

“by organizing the words of a source text stream into a dictionary, gathering all possible words that follow each chunk into a list. Then the Markov generator begins recomposing sentences by randomly picking a starting chunk, and choosing a third word that follows this pair. The chain is then shifted one word to the right and another lookup takes place and so on until the document is complete.”

This is the basis of spam filters too.

In the course of the workshop people built new filters. Dave Young (who is doing really fascinating research on institutionality and network warfare in the US during the Cold War, through the study of its grey literature such as training manuals) made an “Acronymizer”, a filter that searches for much-used phrases in a text and creates acronyms from them.

We’ve also just finished creating our workshop “fanzine” using Sarah Garcin’s Publication Jockey, an Atari-punk, handmade publication device made with a Makey Makey and crocodile clips. The fanzine is a template and experiment for what we will produce at Transmediale. Some people have created entirely new works by applying their machine research practices to pieces of their own text. Based on the really great inputs I got, I rewrote my post as a series of seven scenarios to think about how ethics may be produced in various sociotechnical contexts. There’s that nice ‘so much to think about’ feeling! (And to do, of course.)

Sarah Garcin’s Publication Jockey, PJ

The Problem with Trolleys at re:publica

I gave my first talk about ethics and driverless cars for a non-specialist audience at re:publica 2016. In it I look at the problem with the Trolley Problem, the thought experiment being used to train machine learning algorithms in driverless cars. I focus on the problem that logic-based notions of ethics have been transformed into an engineering problem, and suggest that this ethics-as-engineering approach is what will allow American law and insurance companies to assign blame and responsibility in the inevitable case of accidents. There is also the tension that machines are assumed to be correct, except when they aren’t, and that this sits in a difficult history of ‘praising machines’ and ‘punishing humans’ for accidents and errors. I end by talking about questions of accountability that look beyond algorithms and software themselves to their sites of production.

Here’s the full talk.

Works cited in this talk:

1. Judith Jarvis Thomson’s 1985 paper in the Yale Law Journal, ‘The Trolley Problem’
2. Patrick Lin’s work on ethics and driverless cars. Also relevant is the work of his doctoral students at UPenn looking at applications of Blaise Pascal’s work to the “Lin Problem”
3. Madeleine Elish and Tim Hwang’s paper ‘Praise the machine! Punish the human!’ as part of the Intelligence & Autonomy group at Data & Society
4. Madeleine Elish’s paper on ‘moral crumple zones’; there’s a good talk and discussion with her on the website of the proceedings of the WeRobot 2016 event at Miami Law School.
5. Langdon Winner’s ‘Do Artifacts Have Politics?’
6. Bruno Latour’s Actor Network Theory.

Experience E

How science represents the real world can be cute to the point of frustration. In 7th grade mathematics you get problems like:

“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”

Well, of course, there was no such thing then as men taking paternity leave. But you can’t help thinking about the universe of such a problem. What was the work? Were all the men the same? Did they do the work in the same way? Wasn’t one of them better than the rest, and therefore the leader of the pack, who got to decide what they would do on their day off?

Here is the definition of machine learning according to one of its pioneers, Tom M. Mitchell [1]:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”

This can be difficult for a social scientist to parse, because you’re curious about the experience E and the experiencer of experience E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who set the standard of performance P? For a computer scientist, experience E itself is not that important; rather, how E is achieved, sustained and improved upon is the important part. How science develops these problem-stories becomes an indicator of its narrativising of the world; a world that needs to be fixed.
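
For what it’s worth, the definition can be made embarrassingly concrete. In this toy instantiation (entirely invented for illustration), T is classifying numbers as ‘big’ or ‘small’, P is accuracy on a fixed test set, and E is a growing list of labelled examples; P improves as E accumulates, so the program ‘learns’ in Mitchell’s sense:

```python
# A toy instantiation of Mitchell's T, P and E -- all values invented.

def learn_threshold(experience):
    # 'Learning': place a boundary between the labelled examples seen so far.
    bigs = [x for x, label in experience if label == "big"]
    smalls = [x for x, label in experience if label == "small"]
    if not bigs or not smalls:
        return 0.0
    return (min(bigs) + max(smalls)) / 2

def performance(threshold, test_set):
    # P: accuracy of the learned threshold on a fixed test set (task T).
    correct = sum((x > threshold) == (label == "big") for x, label in test_set)
    return correct / len(test_set)

test_set = [(1, "small"), (2, "small"), (8, "big"), (9, "big")]
E = [(0, "small"), (10, "big"), (6, "big"), (3, "small")]  # experience E
for n in range(1, len(E) + 1):
    print(n, performance(learn_threshold(E[:n]), test_set))  # 0.5, then 1.0
```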

This definition is the beginning of the framing of ethics in an autonomous vehicle: ethics becomes an engineering problem to be solved by logical probabilities, executed and analysed by machine learning algorithms. (TBC)

[1] http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/?utm_source=medium&utm_medium=post&utm_content=chapterlink&utm_campaign=republish

The algorithms of ethics. (And puppycats)

I’m trying to remember when I first heard the phrase ‘the ethics of algorithms’ (TEoA) and why it bothers me. It sounded like something from a triumphant branding exercise. TEoA has dogged me; it has been the intellectual equivalent of an adorable little puppy that snaps at your ankles in encouragement to play, and then opens its eyes wide to melt you with love and fake neediness, saying take me home please, Mommy. (Maybe I’m referring to a cat and not a dog; perhaps only cats and toddlers are capable of such machinations? A puppycat!) The ‘ethics of algorithms’ rolls off the tongue nicely, sounds important and meaningful, and captures the degrees of concern and outrage we feel about the powerful role that computer algorithms have in society, and will continue to have.

I recently started a DPhil (a kind of PhD) in big data and ethics (longer version here), so I’m somewhat invested in the phrase TEoA, because that is what I’m often asked whether my work is about. It isn’t. However, there are people working on the ethics of algorithms, and the good people at CIHR recently published a paper on it, which I think you should read right after you finish reading my post, because it is a good description of the way algorithms work in our quantified society. I’m not referring to any of these things in my work, however. What I’m working on is the algorithms of ethics. By this I mean that I’m going to think about the ethics first and understand how they work, where they come from, and what those ethics [can] mean in the context of big data.

The reason I think TEoA is a cute but needy puppycat, as described above: it is too atomising, deterministic even, and an outcome rather than a starting point. Why would you start with the algorithm, which is the outcome of long chains of technical, scientific, legal and economic events, and not with a point earlier in the process of its development? I think the focus on the outcome, the algorithm, is also indicative of how we think of ethics as outcomes rather than as a series of processes and negotiations. Or the fact that we think about ethics as outcomes has led to a focus on algorithms. Both, possibly.

I don’t think algorithms have ethics; people have ethics. Algorithms govern, perhaps; they make decisions; but I don’t think they have or make ethics. Of course, saying ‘the ethics of algorithms’ isn’t to be taken literally; it’s the assertion that algorithms that make decisions have been programmed (to learn) how to do so by humans (who have the capacity for ethical reasoning). However, the phrase is misleading because it makes it seem as if algorithms are in fact making ethical decisions. At the same time, algorithms cannot function without making some kind of judgment (not moral judgments, though algorithms can do things that have moral implications); they wouldn’t be able to proceed to the next step otherwise. But does this amount to ethics? I suppose it depends on what ethics you subscribe to, but I’d say no.

‘Ethics of algorithms’ could also refer to the ethical features or properties of algorithms, not the ethics that algorithms are assumed to produce. Kraemer, van Overveld and Peterson have a paper on it here, based on medical imaging analysis. This work suggests that algorithms have value judgments baked into them and their functioning, but concludes that ethics is the domain of systems design(ers), and that users should have more control over the outcomes of algorithmic functioning.

My work begins with the hypothesis that principles based on classical ethics (like the oft-quoted Trolley Problem in the context of autonomous vehicles, something that I believe was developed so that journalists could write their stories) are not really appropriate to big data environments (I refer to this as a crisis of ethics), and aims to come up with alternate approaches to thinking about ethics. Along the way I hope to develop methods to study ethics in quantified environments, not just come up with “the answer”. (Thankfully, this is a humanities PhD, so there is no “right answer”.) I’m also pretty sure I will have many new puppycats snapping at my ankles, excited to play.

Post script.
I have also discovered that there is a strange hybrid creature called PuppyCat. Here is a weird animated video with puppycats.


A crisis of ethics in quantified environments

On Friday, October 30th, I presented my new doctoral work to a small group of scholars and engaged political people who came together for an evening event around CIHR’s Fellows’ Day. This post is a summary of some of the ideas discussed there.

*

Every time someone says “but what about the ethics of…”, they’re often referring to a personal architecture of how right and wrong stack up, or of how they think accountability must be pursued; or merely surfacing the outrageous, or the potentially criminal or harmful. This personal morality is then applied to ethical crises and termed “the ethics of” without necessarily applying any ethical rules to it. It’s a combination of truthiness and a sense of fair play, and, if you actually work on info-tech issues, perhaps a little more awareness of the stakes, positions and laws. My doctoral work is about developing a new conceptual framework with which to think about what ethics are in quantified environments [1].

Most of us can identify the crises in quantified environments – breaches, hacks, leaks, privacy violations, the possible future implications of devolving control to autonomous or semi-autonomous vehicles – and these result in moral questions. And everyone has a different moral approach to these things, and yet there is an attempt to appeal to some universal logic of safety, well-being, care and accountability. I argue that this is near impossible. Carolyn Culbertson, reflecting on the development of ethics in the work of Judith Butler, says what I’m trying to say, more eloquently:

“Our beginning-points in ethics are, for the most part, not simply our own. And to the extent that they are, we should want to question how effective these foundations will be in guiding our actions and relationships with others in a world that is even less one’s own. Moral philosophy—and I use that term broadly to mean the way that we think through how best to live our lives—is always in some sense culturally and historically situated. This fact haunts the universalist aspirations of moral philosophy—again, broadly understood—which aims to come up with, if not moral absolutes, at least moral principles that are not merely private idiosyncrasies.” [2]

I argue that human moral reasoning cannot be directly mapped onto resolving ethical crises in quantified and autonomous environments, because of the size, number of actors, complexity, dynamism and plastic nature of these environments. How can ethics (by which I mostly mean consequentialist, virtue-ethics and deontological approaches; there are others, but most derive from these) based on individual moral responsibility and virtues manifest in, and be applicable to, distributed networks of the human [agency, intention, affect and action], the post-human and the software/machine?

Ethics are expected to be broad, overarching, resilient guidelines based on norms repeatedly practiced and universally applied. But attempting to achieve this in quantified environments results in what I’m referring to as a crisis of ethics. This crisis of ethics is the beginning of a new conceptual and methodological approach for thinking about the place and work of ethics in quantified environments, not an indefensible set of ethical values for quantified environments. I will start fleshing out these crises: of consciousness, of care, of accountability and of uncertainty. There may be others.

Yet the feminist philosophers ask: why these morals and ethics anyway? What makes ethics and moral reasoning derived from patriarchal, Western Judeo-Christian codifications in religion and the law valid? What is the baggage of these approaches, and is it possible to escape the Church and the Father? What are the ethics that develop through affect? Is there an ethics in notions of collectivities, distribution, trust, sharing? I’m waiting to dive into the work of ethicists and philosophers like Sara Ahmed and Judith Butler (for starters) to find out. And, as Alex Galloway might say, the ethics are made by the protocols, not by humans. What then? (Did I say I was going to do this in three years?)

*

More updates as and when. I’m happy to talk or participate in events; and share the details of empirical work after April 2016. This is a part-time PhD at the Institute of Culture and Aesthetics of Media at Leuphana University in Lüneburg, Germany. I continue to work full-time at Tactical Tech.

Notes:

[1] ‘Big data’ is a term that has general, widespread use and familiarity; however, its ubiquity also makes it opaque. The word ‘big’ is misleading, for it tends to indicate size or speed, which are features of this phenomenon but do not reveal anything about how it came to be either large or fast. ‘Data’ is equally misleading, because nearly every technical part of the internet runs on the creation, modification and exchange of data. There is nothing about the phrase ‘big data’ that tells us what it really is. So I use the terms ‘quantification’ and ‘quantified environments’ interchangeably with ‘big data’. ‘Quantified environment’ refers to a specific aspect of digital environments, i.e. quantification, which is made possible through specific technology infrastructures and business and legal arrangements that are both visible and invisible. The use of ‘quantification’ also indicates a subtle but real shift to the ‘attention economy’, where every single digital action is quantified within an advertising-driven business model. ‘QE’ is also an entry into discussing the social, political, technical and infrastructural aspects of digital ecosystems through specific case studies.

[2] Culbertson, C. (2013) ‘The Ethics of Relationality: Judith Butler and Social Critique’, Continental Philosophy Review 46: 449–463.