A MECS-ian

I just started a six-month part-time fellowship at the MECS – Media Cultures of Computer Simulation – Institute for Advanced Study at my home institution, Leuphana University, in Lüneburg, Germany. Here is what I’m working on there. I’m in great company with some really smart PhDs and Post-Docs and am really happy to be part of this community.

Winter 2018 Writing

Aside from being on a treadmill of writing proposals and applications for money/workshops/summer institutes/fellowships, January and February were good months for staying indoors and writing. I wrote about big data, biometrics and Aadhaar for Cyborgology; this was a short version of the longer piece originally commissioned by Tactical Tech for their new Our Data Our Selves project.

I recently came across the Center for Humane Technology and tried to write something measured (read: stuffed my cynicism into a box) about their philosophy and practice of disconnection from the internet, and about what is problematic in the individual ‘responsibilization’ for everything that is wrong with the information economy. I followed that up by drawing on Sarah Sharma’s work to think through the Center for Humane Technology’s construction of ‘time well spent’.

Also, winter was a time for sowing seeds. More to come on all that.

The world in itself, without us: Extraction, Infrastructure & Tech

The Planetary Futures Summer School was a gift in terms of material to chew on and write about. I had a post up on Cyborgology about the visit to the Malartic Gold Mine.

“A mine is a complex space of flows,” says Dr. Mostafa Benzaazoua.

I’m not expecting a professor of geological engineering to use a phrase from the media studies canon. I write in my notebook: “maybe media studies before mining science?!!!” Or perhaps that phrase has now entered everyday scholarly parlance. Over the course of the next few hours, Dr. Benzaazoua gives us a detail-rich lecture on how gold is mined from the earth, and the spaces of flows the mine and its products inhabit. The next day we leave before dawn to visit Canada’s largest open pit gold mine.

When you visit an open pit gold mine, it takes time for your eyes to adjust to the grayscale landscape. More lunar than Luxor, you don’t see anything even remotely golden at a gold mine, except perhaps the cheesy gold hard hats we visitors wear. We watch the open pit of the mine from a viewing gallery many metres away from and above it; it is very, very quiet here. You expect to hear something, but we’re too far away to hear the machines drill the earth and bring up rocks, which are loaded into large trucks. Each truck has eight wheels; each wheel costs $42,000 and is about ten feet high. The trucks lumber about like friendly, giant worker-animals. Driving them requires significant skill; we are told that women make better drivers. The trucks take the rocks away to the factory, where they are analysed for gold.

Someone says something later about the mine being cyborg: the organic Earth, with its transformative automated elements – the drilling machines, the trucks – and the ‘intra-action’ of the two being the mine itself.

Read more here.

Exit: Moon. A Short Story About Staying With The Trouble

I spent two weeks in August at Planetary Futures, a summer school at Concordia University in Montreal organised by Orit Halpern, Marie-Pier Boucher, and Pierre Louis Patoine, and hosted at Milieux, the centre for art, technology, and culture at Concordia. The course linked planetary-scale catastrophe and the Anthropocene to histories of infrastructure and colonialism, and investigated design and fiction as vehicles of speculation about the future(s). It brought

“together the disciplines of the arts, humanities, social sciences, and sciences to collectively investigate this question of how we shall inhabit the world in the face of the current ecological crisis and to rethink concepts and practices of environment, ecology, difference, and technology to envision, and create, a more just, sustainable, and diverse planet. The course will include field visits to extraction sites, energy infrastructures, earth science installations, and speculative architecture and design projects.”

Continue reading Exit: Moon. A Short Story About Staying With The Trouble

“Imagining Ethics”: Testing out SF as a method

Back in early April on a trip to the Theorizing the Web conference in New York, two artists-in-residence at New Inc, Stephanie Dinkins and Francis Tseng, invited me to test out something at their monthly “AI Assembly”.

I believe that it is difficult for everyday users to understand and make sense of digital technologies, and that specialists like computer scientists or lawyers can be restricted by their disciplinary training from seeing the ways in which technology and society interact.

Yet I think we have to inspire a wider conversation about digital literacies, given the present and future of ubiquitous computing and artificial intelligence. From data breaches to the complex decision-making expected of machine learning, what are the ways in which people may conceptualise values and norms for regulating human-machine relationships in a near future?

There are a number of methods for mapping out the social-political-economic dimensions of future scenarios, and they’re commonly used across different fields, including the automotive industry. (Mathematical modeling for predicting crashes has in fact been around since the 1980s.) I’ve also been thinking about using SF (speculative fiction, science fiction, speculative feminism, science fabulation, string figuring: Donna Haraway expands SF beyond ‘science fiction’) as a way of telling stories about power, society, and technology.

Inspired by these, I’m curious about the imagination, and the role that imaginaries play in shaping and articulating how people think about a near future with machines and technology. I believe that ‘socio-technical imaginaries’ – a concept developed by Sheila Jasanoff and Sang-Hyun Kim to describe the collectively held visions that underlie and shape the development of technologies in society – may be an interesting theoretical framework to adopt and adapt. I’m trying to find a way to bring these elements together, and the New Inc experiment is part of that.

There’s more about all this on the Cyborgology blog here.

Accident Tourist on Cyborgology

Crossposting my latest piece on Cyborgology.
Accident Tourist: Driverless car crashes, ethics, and machine learning is an essay that attempts to unpack a particular narrative of ethics that has been constructed around driverless car technology. In it, I show that ethics has been constructed as an outcome of machine-learning software rather than developed as a framework of values. How can we read this ethics-as-software in the case of crashes, such as the Tesla crash from May 2016? How does ethics play out in determining accountability and responsibility for car crashes, which I claim is a powerful force in the construction of ethics in AI? Looking at the history of accountability for aviation crashes, I conclude that the notion of accountability in AI cannot be output- or outcome-driven, but should instead encompass the entanglements between machine and human agents working together.

Machine Research @ Transmediale

The results of the Machine Research workshop from back in October were launched at Transmediale: the zine, and a studio talk.

During the workshop, we explored the use of various writing machines and ways in which research has become machine-like. The workshop questioned how research is bound to the reputation economy and profiteering of publishing companies, who charge large amounts of money to release texts under restrictive conditions. Using Free, Libre, and Open Source collaboration tools, Machine Research participants experimented with collective notetaking, transforming their contributions through machine authoring scripts and a publishing tool developed by Sarah Garcin. (The image accompanying this post is a shot of the PJ, or Publication Jockey, with some text it laid out on a screen in the back.) The print publication, or ‘zine, launched at transmediale, is one result of this process. You can read the zine online.

The studio talk brought together one half of our research group, who talked about ‘infrastructures’. Listen to it here (I’m speaking at 44:09):

Four things to hold on to according to Antoinette Rouvroy

A few months ago I was at a workshop in Brussels, on the side of which Antoinette Rouvroy and Seda Gürses were invited to speak. They both said really important things about their work: algorithmic governmentality, and Why Are We Talking About The Cloud Now?, respectively.

I asked Rouvroy what resistance looks like in the face of narratives of big data that appear totalizing (both the narratives about big data, and big data itself, appear totalizing). What are the things that escape digitalisation? She said that there is a tendency of life to be recalcitrant to organisation, and named these things:

– Physical things: the fact of bodies and organic life, which are wholly unpredictable;
– Utopias we had/have that don’t find a place in any present;
– Dreams of the future;
– The fact that if we were really present and complete, we would not talk to each other: we are separated from ourselves through language anyway; trying to find a way back is resistance.

“Algorithmic thinking is tempting because it precludes hesitation, doubt, and failure; failure is a space to hold on to.”

2016. Writing, Publishing.

I wrote different kinds of things this past year. Here they are, most recent first.

Rounded off the year with the daily beat ‘Reflected‘ about visitors and special guests in the #glassroom in New York

What is the city when it is made for autonomous vehicles with AI? Over @cyborgology

Started writing for @cyborgology about cinema, cybernetics, automation & cars, with a visit to a BMW car factory

Published @Info_Activism‘s digisec research abt digisec trainers & security in context by Carol Waters and Becky Kazansky

Privacy, visibility, anonymity: Dilemmas in activists’ tech use. New publication from me, @jsdeutch @schultjen

2016 started with the White Room @ Nervous Systems: Quantified Life & the Social Question (text not available online)

33c3 Talk Notes + Video: Entanglements (Ethics in the Data Society)

(original title: ‘Ethics in the data society’)
Notes for a talk given at 33c3 in Hamburg, Dec 29, 2016
(Video here until it moves to YouTube?)

This talk is not about what the answer should be to the question “how do we program driverless cars to make ethical decisions?” This talk is about how and why we’re asking that question in this way. What do we really mean when we say ‘ethics’? This talk assembles some recent examples of how a narrative around ethics in the context of driverless cars is being developed, and asks whether we are really talking about ethics, or about managing the risks we perceive to be associated with artificial intelligence.

What are some of the ways in which the narrative around ethics is being constructed?

Ethics as the outcome of moral decision-making;
Ethics as accountability for errors, breakdowns, accidents;
Ethics as a way to regulate the risks we perceive to be associated with AI, and as a way to construct a rationale for the financialisation of this risk by various regulatory industries;
Ethics in technology design;
And the suggestion, perhaps, that ethics is constituted by multiple factors and actors and is produced locally and contextually.

Driverless cars are being developed with not-very-robust software, and in the open (like Uber’s testing and development in Pittsburgh), and there is little clarity about how safe these technologies are. Moreover, what we know about ethical considerations around AI tends to come from SF – speculative (or science) fiction. We know about AI going rogue, from HAL in 2001: A Space Odyssey to Ava in Ex Machina. With Ava we see an AI that passes the Turing Test by behaving in a ruthless, cunning and unethical way in order to actually survive – one of the most human things it does.

Thus alongside expectations of precision and rationality through computing, we also have fear, fantasy and anxiety with respect to what we think intelligent machines can and will do.

1. Ethics as an outcome of software

“Technology will come to the rescue of its naughty but very clever children” (Donna Haraway) is one way to see the idea that machine learning will enable us eventually to get machines to learn the appropriate response to a situation in which a driverless car may be involved in an accident. There is a history to AI ethics in which ethics is expected to be the outcome of a computer program. In the past this had to be hard-coded in; now, ethics as moral decision-making could effectively be ‘learned’ by showing an algorithm different ways to act in cases of potential accidents.

There is a desire to find models for programming ethics. The Trolley Problem, for instance, became popular with Google’s self-driving car project, and there is a new application of it in a project from MIT called the Moral Machine. The Trolley Problem is a 1960s thought experiment in which a decision must be made between two difficult choices: in the case of a potential accident, to have either five people killed, or one. It pits consequentialist ethics (outcomes are what matter, therefore saving five people is more important) against Kantian, deontological ethics (what matters is how you arrive at the reasons for saving those five people; the rules you use to arrive at a decision). Moral Machine is a project in which you can play the Trolley Problem, and it is also part of a research study being done at MIT.
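The contrast between the two framings can be made painfully concrete in code, which is rather the point being critiqued: once programmed, each ‘ethics’ is just a decision rule. This is a toy sketch of my own (the scenario, labels and fields are invented, not from Moral Machine):

```python
# Toy trolley scenario: each option records its expected deaths and whether
# choosing it means actively intervening.
def consequentialist_choice(a, b):
    """Pick whichever option minimises expected deaths -- only outcomes matter."""
    return a if a["deaths"] <= b["deaths"] else b

def deontological_choice(a, b):
    """Apply a fixed duty -- here, 'do not actively intervene to cause a death' --
    regardless of how the body counts compare."""
    for option in (a, b):
        if not option["intervene"]:
            return option
    return a  # no option satisfies the duty; fall back arbitrarily

stay = {"label": "stay on track", "deaths": 5, "intervene": False}
pull = {"label": "pull the lever", "deaths": 1, "intervene": True}

consequentialist = consequentialist_choice(stay, pull)  # chooses to pull the lever
deontologist = deontological_choice(stay, pull)         # refuses to intervene
```

The two ‘ethical agents’ disagree on the same inputs, and neither rule captures what we would ordinarily call moral reasoning.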

However, one of the creators of the project has said something that suggests even he thinks it isn’t about easy choices:

“At this point, we are looking into various forms of shared control, which means that any accident is going to have a complicated story about the exact sequence of decisions and interventions from the vehicle and from the human driver.” – Jean-François Bonnefon, Dec 27, 2016, in Gizmodo (ref below)

(It is also useful to ask what Moral Machine will do with all the data it collects from its online scenarios.)

So perhaps it isn’t as simple as making a choice between one or the other. Others have suggested approaches such as Pascalian programming (Bhargava 2016), in which the outcomes of a potential accident are ranked and rated, and the software makes a decision more ‘randomly’ based on the context (for example, what kind of car, how many people are in each, climatic conditions, and so on). In Sepielli’s Pascalian approach, applied by Bhargava, you can even factor in damage to the driverless car and driver, which the Trolley Problem does not do, and which is generally anathema to car manufacturers.
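As I understand it, the Pascalian approach could be sketched as a weighted random choice over ranked outcomes rather than a single hard-coded rule. The outcome names, harm scores, and weighting scheme below are entirely my own invention, for illustration only:

```python
import random

# Hypothetical outcomes of a potential accident, each scored by contextual
# factors (vehicle type, occupants, weather...). Scores are made up here.
outcomes = [
    {"action": "brake hard",   "harm_score": 2.0},
    {"action": "swerve left",  "harm_score": 5.0},
    {"action": "swerve right", "harm_score": 8.0},
]

def pascalian_choice(options, rng=random):
    """Weight each option by inverse harm and sample proportionally, so the
    decision is 'more random' but still biased by contextual ranking --
    rather than a single fixed rule applied every time."""
    weights = [1.0 / option["harm_score"] for option in options]
    return rng.choices(options, weights=weights, k=1)[0]

choice = pascalian_choice(outcomes)  # usually, but not always, "brake hard"
```

Run repeatedly, the least harmful action dominates but the others still occur, which is exactly the kind of non-determinism the Trolley Problem framing excludes.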

There is also the Ethics Bot suggested by Etzioni and Etzioni (2016): a way for a bot to manage algorithms so as to create appropriate and personalised ethical standards. They base this on users’ use of the home thermostat Nest: by being conscious of energy use and managing heating through Nest, a person is taken to be more ethical because they care about the environment. The Ethics Bot monitors energy use and finds patterns that are assumed to constitute the individual’s moral standard; the algorithms in Nest regulating heat and energy use are then regulated by the Ethics Bot. However, this rests on many assumptions, chiefly that regulating heating has anything to do with ethical environmental behaviour.
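Reduced to its mechanics, the proposal amounts to inferring a personal norm from behavioural data and then enforcing it. A crude sketch of that logic (the numbers, function names, and clamping behaviour are mine, not the Etzionis’):

```python
# Hypothetical week of thermostat settings (degrees C) logged from a
# Nest-like device.
observed_settings = [19.0, 18.5, 19.5, 18.0, 19.0, 18.5, 19.0]

def infer_standard(settings):
    """Treat the user's average behaviour as their 'moral standard' --
    the assumption the whole Ethics Bot idea rests on."""
    return sum(settings) / len(settings)

def ethics_bot(requested, standard, tolerance=1.0):
    """Clamp a requested setting to within `tolerance` of the inferred
    standard, 'regulating the algorithm' on the user's behalf."""
    low, high = standard - tolerance, standard + tolerance
    return max(low, min(high, requested))

standard = infer_standard(observed_settings)  # roughly 18.8 C
clamped = ethics_bot(24.0, standard)          # pulled back toward the 'norm'
```

Note how the code makes the leap visible: an average of past settings becomes a “moral standard” purely by declaration, which is precisely the assumption worth questioning.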

2. What do we think of machines that learn?
Moral Machines, Ethics Bots, and the Pascalian approach to programming for uncertainty all expect that ethics will be an outcome of software programming, of Big Data thinking: that if you show a database enough options and situations, it will learn the appropriate ethical decision in the case of an accident. But is this “ethics”, really? What’s the problem with the idea of programming ethics, or expecting the machine to give us ethical responses? And if you want to rely on machine learning and vast data sets to make things more personalised, then the question we have to ask is an update of Turing’s famous question: ‘what do we think about machines that think’ needs to become ‘what do we think about machines that learn’. What we know is that machines are, like children and primates, ‘almost-minds’, as Donna Haraway put it. They learn quite literally. We have already seen that machine learning is pretty basic, and reproduces biases existing in data. So why do we expect that enough training data gathered from (what?) sources will give us ‘ethical’ results produced by machines?
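That machines learn “quite literally” is easy to demonstrate with a toy example: a frequency-based learner trained on skewed data faithfully reproduces the skew. The data and the tiny ‘model’ below are invented for illustration; no real system is this simple, but the failure mode is the same:

```python
from collections import Counter

# Invented, deliberately skewed training data: (occupation, pronoun) pairs.
training_data = [("engineer", "he")] * 90 + [("engineer", "she")] * 10

def train(pairs):
    """A minimal 'model': for each occupation, memorise the most frequent
    pronoun seen in training. This is all the 'learning' there is."""
    counts = {}
    for occupation, pronoun in pairs:
        counts.setdefault(occupation, Counter())[pronoun] += 1
    return {occ: c.most_common(1)[0][0] for occ, c in counts.items()}

model = train(training_data)
prediction = model["engineer"]  # the model reproduces the skew in its data
```

The learner did exactly what it was asked; the bias was in what it was shown. Scaling up the data and the model does not, by itself, change that.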

3. It’s almost as if we’re building and developing systems to be machine-readable, for computers, models, databases, and simulations to correspond with each other. One of the things I’m interested in is how we reshape systems for the ascendancy of intelligent machines. Perhaps there are other questions we should be asking about ethics, like what does it mean to have AI in autonomous vehicles?

For example, what is the future of cityspace when it is built for driverless cars? (Read more on the Cyborgology blog.)

4. Ethics-as-accountability ; Ethics and financialisation of risk
There is another way in which we talk about ethics: in terms of accountability. Again, starting from framing the car in terms of its involvement in accidents: who is accountable and responsible, and what is a way to think about accountability? Accountability before the law, as well as accountability in terms of insurance payouts and product liabilities.

There is a history, from the 1990s onwards, of car crashes being simulated and modelled as a way to save money on crash testing and to anticipate how accidents take place. Is it possible that we will see the risk associated with driverless car crashes being modelled, projected, and assigned financial value?

In the 1990s-2000s there was increased use of models and mathematical simulations of crashes by car manufacturers. Nigel Gale quotes an American car maker talking about this:

“Road to lab to math is basically the idea that you want to be as advanced on the evolutionary scale of engineering as possible. Math is the next logical step in the process over testing on the road and in the lab. Math is much more cost effective because you don’t have to build pre-production vehicles and then waste them. We’ve got to get out in front of the technology so it doesn’t leave us behind. We have to live and breathe math. When we do that, we can pass the savings on to the consumer.” (Gale 2005)

There will be increased involvement of different actors in regulating the behaviour of driverless cars: industry level, new regulatory environments, urban and civic infrastructure, public education, law enforcement, ways to deal with massive levels of social change and disruption that cars will bring. All of these will have some role to play in the legalisation and regulation of ethical behaviour. It may not be ‘ethics’ but may be claimed as such.

5. Ethics as accountability and design
Another aspect of ethics-as-accountability has to do with design: the idea that accountability lies with the designers and developers of technology.
There is value perceived in showing who the ‘man behind the curtain’ is, in assigning some accountability to them, and in showing how design carries the values of designers in it.

There has long been consideration of AI and ethics, and of fears of AI going rogue. There are also the ethics related to the work of scientists and technical workers.
This is where cinema and literature and fantasy come in as well.

There are lots of recent examples of how technology designed by mostly white, male communities does not reflect the realities of the world of its users. For example, assistants like Google Now can respond helpfully if you say “I’m having a heart attack”, or if you mention suicide; but if you were to say “I’m being raped”, the response is, “I’m sorry, I don’t know what rape is”. Or the story of the Apple Watch, one of whose cool features may not work for people of colour because of how it was tested on white people.

So there are lots of examples of technology merely reflecting the biases of its designers. Is this framing really tenable with driverless cars, when you have so many different actors and millions of lines of code? It’s a complex feat to identify every actor and factor in a car.

One of the most interesting recent documents that seems to show a serious attempt to think about ethics in artificially intelligent / autonomous systems is from the IEEE, and is called Ethically Aligned Design.

6. Ethics as produced contextually
So is there a way to think about ethics differently, as something that is produced more contextually and locally? It is something of an ambition of mine to see whether this can be done.

“technology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making. Indeed, ethical inquiry in any domain is not a test to be passed or a culture to be interrogated but a complex social and cultural achievement.” (Ananny 2016)

7. I recently did a workshop with designers and programmers working at a tech company related to autonomous driving. I am interested in playing with different methods and approaches right now, and I’m really interested in Scenario Planning and Design Fiction as ways to do this.

Would it be possible for designers and programmers to consider ethics contextually, locally, and imaginatively in relation to actual things that could happen a few years from now? Here are the cases I worked through with a group of designers and programmers recently:

– Build a map layer for use by parents and children to share transportation
– How could a car ride sharing fleet service build an option for women users to feel safe?
– What are the issues in a wayfinding or route-finding service for autonomous cars that avoids ‘high crime neighbourhoods’?

What was interesting about the responses was that the designers and programmers I talked to are highly aware of the issues and concerns, but feel they are part of a corporate entity that eventually has to make money by selling things in a certain way. Paula Bialski’s work also shows that programmers in tech companies like this one are looking for spaces for micro-resistance and ways to do micropolitics.

Ananny, M. (2016) ‘Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness’, Science, Technology, & Human Values 41(1): 93–117.
Bhargava, V. (2016, forthcoming) ‘What if Blaise Pascal Designed Autonomous Vehicles?’, in Roboethics 2.0.
Cramer, F. (2016) ‘Crapularity Hermeneutics’. http://cramer.pleintekst.nl/essays/crapularity_hermeneutics/#fnref37
Etzioni, A. and Etzioni, O. (2016) ‘AI Assisted Ethics’, Ethics and Information Technology 18: 149–156.
Gale, N. (2005) ‘Road-to-lab-to-math: A New Path to Improved Product’, Automotive Engineering International (May): 78–79.
Weiner, S. (2016) http://gizmodo.com/if-a-self-driving-car-kills-a-pedestrian-who-is-at-fau-1790049637

Machine Learning For Girl Gangs