Tag Archives: ethics

“Imagining Ethics”: Testing out SF as a method

Back in early April on a trip to the Theorizing the Web conference in New York, two artists-in-residence at New Inc, Stephanie Dinkins and Francis Tseng, invited me to test out something at their monthly “AI Assembly”.

I believe that it is difficult for everyday users to understand and make sense of digital technologies; and specialists like computer scientists or lawyers can be restricted by their disciplinary training from seeing the ways in which technology and society interact.

Yet, I think we have to inspire a wider conversation about digital literacies, given the present and future of ubiquitous computing and artificial intelligence. From data breaches to the complex decision-making expected of machine learning, what are the ways in which people may conceptualise values and norms for regulating human-machine relationships in a near future?

There are a number of methods for mapping out the social-political-economic dimensions of future scenarios, and they’re commonly used across different fields, including the automotive industry. (Mathematical modeling for predicting crashes has in fact been around since the 1980s.) I’ve also been thinking about using SF (speculative fiction, science fiction, speculative feminism, speculative fabulation, string figures: Donna Haraway expands SF beyond ‘science fiction’) as a way of telling stories about power, society, and technology.

Inspired by these, I’m curious about the imagination, and the role that imaginaries play in shaping and articulating how people think about a near future with machines and technology. I believe that ‘socio-technical imaginaries’, a concept developed by Sheila Jasanoff and Sang-Hyun Kim to describe the collective visions that underlie and shape the development of technologies in society, may be an interesting theoretical framework to adopt and adapt. I’m trying to find a way to bring these elements together, and the New Inc experiment is part of that.

There’s more about all this on the Cyborgology blog.

Machine Research @ Transmediale

The results of the Machine Research workshop from back in October were launched at Transmediale: the zine, and a studio talk.

During the workshop, we explored the use of various writing machines and the ways in which research has become machine-like. The workshop questioned how research is bound to the reputation economy and the profiteering of publishing companies, who charge large amounts of money to release texts under restrictive conditions. Using Free, Libre, and Open Source collaboration tools, Machine Research participants experimented with collective notetaking, transforming their contributions through machine-authoring scripts and a publishing tool developed by Sarah Garcin. (The image accompanying this post is a shot of the PJ, or Publication Jockey, with some text it laid out on a screen in the back.) The print publication, or ‘zine, launched at Transmediale, is one result of this process. You can read the zine online.

The studio talk brought together one half of our research group, who talked about ‘infrastructures’. Listen to it here (I’m speaking at 44:09):

Machine Research workshop w/ Constant, Aarhus U, Transmediale

I’m in Brussels with a group of fellow PhDs, academics, artists and technologists, at a workshop called Machine Research organised by Constant, Aarhus University’s Participatory IT centre, and Transmediale.

The workshop aims to engage research and artistic practice that takes into account the new materialist conditions implied by nonhuman techno-ecologies including new ontologies of learning and intelligence (such as algorithmic learning), socio-economic organisation (such as blockchain), population management and tracking (such as datafied borders), autonomous or semi-autonomous systems (such as bots or drones) and other post-anthropocentric reconsiderations of agency, materiality and autonomy.

I wanted to work on developing a subset of my ‘ethnography of ethics’ with a focus on error, trying to think about what error means and how it is managed in the context of driverless car ethics. It’s been great to have this time to think with other people working on related – and very unrelated – topics. It is the small things that count, really; like being able to turn around and ask someone: “what’s the difference between subjection, subjectivity, subjectification, subjectivization?”. The workshop was as much about researching the how of machines as it was about the how of research. I appreciated some encouraging thoughts and questions about what an ‘ethnography’ means as it relates to ethics and driverless cars, as well as a fantastic title for the whole thing (thanks Geoff!!).

Constant’s work involves a lot of curious, cool, interesting publishing and documentation projects, including some of an Oulipo variety. So one of the things they organised for us was etherpads. I use etherpads a lot at work, but for some people this was new. It was good seeing pads in “live editing” mode, rather than just for storage and sharing. We used the pads to annotate everyone’s presentations with comments, suggestions, links, and conversation. They had also made text filters that performed functions like deleting prepositions (the “stop words” filter), or recomposing text based on Markov chains (the Markov filter), which works:

“by organizing the words of a source text stream into a dictionary, gathering all possible words that follow each chunk into a list. Then the Markov generator begins recomposing sentences by randomly picking a starting chunk, and choosing a third word that follows this pair. The chain is then shifted one word to the right and another lookup takes place and so on until the document is complete.”

This is the basis of spam filters too.
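A minimal Python sketch of such a Markov filter, following the quoted description. The function names and the toy source text are my own, not Constant’s actual script:

```python
import random
from collections import defaultdict

def build_chain(text, chunk_size=2):
    """Organise the words of a source text into a dictionary:
    each chunk (pair of words) maps to a list of possible next words."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - chunk_size):
        chunk = tuple(words[i:i + chunk_size])
        chain[chunk].append(words[i + chunk_size])
    return chain

def generate(chain, length=30, seed=None):
    """Recompose sentences: pick a starting chunk at random, choose a
    word that follows it, then shift the chunk one word to the right."""
    rng = random.Random(seed)
    chunk = rng.choice(list(chain))
    out = list(chunk)
    for _ in range(length):
        followers = chain.get(tuple(out[-2:]))
        if not followers:  # dead end: no word ever followed this chunk
            break
        out.append(rng.choice(followers))
    return " ".join(out)

source = "the quick brown fox jumps over the lazy dog and the quick grey cat"
chain = build_chain(source)
print(generate(chain, length=10, seed=1))
```

With a longer source text, the recombined output reads as locally plausible but globally drifting prose, which is what made the filtered pads so uncanny.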

In the course of the workshop people built new filters. Dave Young (who is doing really fascinating research on institutionality and network warfare in Cold War America through the study of its grey literature, like training manuals) made an “Acronymizer”: a filter that searches for much-used phrases in a text and creates acronyms from them.
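I didn’t keep a copy of Dave’s script, but a filter like the Acronymizer might look something like this in Python. The n-gram length, the frequency threshold and the function name are my guesses, not his code:

```python
import re
from collections import Counter

def acronymize(text, phrase_len=3, min_count=2):
    """Search for much-used phrases (n-grams of phrase_len words) in a
    text and replace them with acronyms built from their initials."""
    words = re.findall(r"[A-Za-z']+", text)
    ngrams = [tuple(words[i:i + phrase_len])
              for i in range(len(words) - phrase_len + 1)]
    # keep only phrases that occur often enough to deserve an acronym
    frequent = [p for p, c in Counter(ngrams).items() if c >= min_count]
    for phrase in frequent:
        acronym = "".join(w[0].upper() for w in phrase)
        text = text.replace(" ".join(phrase), acronym)
    return text

print(acronymize("the means of production shape the means of production"))
```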

We’ve also just finished creating our workshop “fanzine” using Sarah Garcin’s Publication Jockey, a handmade, Atari-punk publication device made with a Makey Makey and crocodile clips. The fanzine is a template and experiment for what we will produce at Transmediale. Some people created entirely new works by applying their machine research practices to pieces of their own text. Based on the really great inputs I got, I rewrote my post as a series of seven scenarios to think about how ethics may be produced in various sociotechnical contexts. There’s that nice ‘so much to think about’ feeling! (And so much to do, of course.)

Sarah Garcin’s Publication Jockey, PJ

The Problem with Trolleys at re:publica

I gave my first talk about ethics and driverless cars for a non-specialist audience at re:publica 2016. In it, I look at the problem with the Trolley Problem, the thought experiment being used to train machine learning algorithms in driverless cars. I focus on how logic-based notions of ethics have been transformed into an engineering problem, and suggest that this ethics-as-engineering approach is what will allow American law and insurance companies to assign blame and responsibility in the inevitable case of accidents. There is also the tension that machines are assumed to be correct, except when they aren’t, and that this sits in a difficult history of ‘praising machines’ and ‘punishing humans’ for accidents and errors. I end by talking about questions of accountability that look beyond algorithms and software to the sites where algorithms are produced.

Here’s the full talk.

Works cited in this talk:

1. Judith Jarvis Thomson’s 1985 paper in the Yale Law Journal, The Trolley Problem
2. Patrick Lin’s work on ethics and driverless cars. Also relevant is the work of his doctoral students at UPenn looking at applications of Blaise Pascal’s work to the “Lin Problem”
3. Madeleine Elish and Tim Hwang’s paper ‘Praise the machine! Punish the human!’ as part of the Intelligence & Autonomy group at Data & Society
4. Madeleine Elish’s paper on ‘moral crumple zones’; there’s a good talk and discussion with her on the website of the proceedings of the WeRobot 2016 event at Miami Law School.
5. Langdon Winner’s ‘Do Artifacts Have Politics?’
6. Bruno Latour’s Actor Network Theory.

Experience E

How science represents the real world can be cute to the point of being frustrating. In 7th grade mathematics you get problems like:

“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”

Well, of course, back then there was no such thing as men taking paternity leave. But you can’t help but think about the universe of such a problem. What was the work? Were all the men the same, did they do the work in the same way, wasn’t one of them better than the rest, and therefore the leader of the pack who got to decide what they would do on their day off?

Here is the definition of machine learning according to one of the pioneers of machine learning, Tom M. Mitchell[1]:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”

This can be difficult for a social scientist to parse, because you’re curious about the experience E and the experiencer of experience E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who set the standard for performance P? For a scientist, experience E itself is not that important; rather, how E is achieved, sustained and improved on is the important part. How science develops these problem-stories becomes an indicator of its narrativising of the world; a world that needs to be fixed.
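To make the definition concrete, here is a toy instance in Python (my own illustration, not Mitchell’s): the task T is predicting a noisy signal, the performance measure P is absolute prediction error, and the experience E is simply the values observed so far.

```python
# Task T:       predict the next value of a noisy signal
# Measure P:    absolute prediction error against the true value
# Experience E: the values observed so far

def predictor(experience):
    """Predict by averaging everything experienced so far."""
    return sum(experience) / len(experience) if experience else 0.0

signal = [9, 11, 10, 10, 9, 11, 10, 10]  # noisy readings around a true value of 10
errors = []
for i in range(1, len(signal)):
    prediction = predictor(signal[:i])   # learn from experience E = signal[:i]
    errors.append(abs(prediction - 10))  # measure performance P

# With more experience E, performance P (the error) improves:
print(errors)
```

The program “learns” in Mitchell’s sense, yet nothing in the definition asks where the signal came from, who decided the true value was 10, or why error against it is the measure that counts; that is exactly the gap the social scientist is curious about.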

This definition is the beginning of the framing of ethics in an autonomous vehicle. Ethics becomes an engineering problem, to be solved by logical probabilities executed and analysed by machine learning algorithms. (TBC)

[1] http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/

A crisis of ethics in quantified environments

On Friday, October 30th, I presented my new doctoral work to a small group of scholars and engaged political people who came together for an evening event around CIHR’s Fellows’ Day. This post is a summary of some of the ideas discussed there.


When someone says “but what about the ethics of…”, they’re often referring to a personal architecture of how right and wrong stack up, or of how they think accountability must be pursued; or merely surfacing the outrageous, or the potentially criminal or harmful. This personal morality is then applied to ethical crises and termed “the ethics of” without necessarily applying any ethical rules to it. It’s a combination of truthiness and a sense of fair play, and, if you actually work on info-tech issues, perhaps a little more awareness of the stakes, positions and laws. My doctoral work is about developing a new conceptual framework with which to think about what ethics are in quantified environments [1].

Most of us can identify the crises in quantified environments – breaches, hacks, leaks, privacy violations, the possible future implications of devolving control to autonomous or semi-autonomous vehicles – and these result in moral questions. Everyone has a different moral approach to these things, and yet there is an attempt to appeal to some universal logic of safety, well-being, care and accountability. I argue that this is near impossible. Carolyn Culbertson, reflecting on the development of ethics in the work of Judith Butler, says what I’m trying to say, more eloquently:

“Our beginning-points in ethics are, for the most part, not simply our own. And to the extent that they are, we should want to question how effective these foundations will be in guiding our actions and relationships with others in a world that is even less one’s own. Moral philosophy—and I use that term broadly to mean the way that we think through how best to live our lives—is always in some sense culturally and historically situated. This fact haunts the universalist aspirations of moral philosophy—again, broadly understood—which aims to come up with, if not moral absolutes, at least moral principles that are not merely private idiosyncrasies.” [2]

I argue that human moral reasoning cannot be directly mapped onto resolving ethical crises in quantified and autonomous environments because of the size, number of actors, complexity, dynamism and plastic nature of these environments. How can ethics (by which I mostly mean consequentialist, virtue-ethics and deontological approaches; there are others, but most derive from these) based on individual moral responsibility and virtues manifest in, and be applicable to, distributed networks of human agency, intention, affect and action, the post-human, and software/machines?

Ethics are expected to be broad, overarching, resilient guidelines based on norms repeatedly practiced and universally applied. But attempting to achieve this in quantified environments results in what I’m referring to as a crisis of ethics. This crisis of ethics is the beginning of a new conceptual and methodological approach for thinking about the place and work of ethics in quantified environments, not an indefensible set of ethical values for quantified environments. I will start fleshing out these crises: of consciousness, of care, of accountability and of uncertainty. There may be others.

Yet, the feminist philosophers ask: why these morals and ethics anyway? What makes ethics and moral reasoning derived from patriarchal, Western Judeo-Christian codifications in religion and the law valid? What is the baggage of these approaches, and is it possible to escape the Church and the Father? What are the ethics that develop through affect? Is there an ethics in notions of collectivity, distribution, trust, sharing? I’m waiting to dive into the work of ethicists and philosophers like Sara Ahmed and Judith Butler (for starters) to find out. And, as Alex Galloway might say, the ethics are made by the protocols, not by humans. What then? (Did I say I was going to do this in three years?)


More updates as and when. I’m happy to talk or participate in events; and share the details of empirical work after April 2016. This is a part-time PhD at the Institute of Culture and Aesthetics of Media at Leuphana University in Lüneburg, Germany. I continue to work full-time at Tactical Tech.


[1] ‘Big data’ is a term that has general, widespread use and familiarity; however, its ubiquity also makes it opaque. The word ‘big’ is misleading, for it tends to indicate size or speed, which are features of this phenomenon but do not reveal anything about how it came to be either large or fast. ‘Data’ is equally misleading because nearly every technical part of the internet runs on the creation, modification and exchange of data. There is nothing about the phrase ‘big data’ that tells us what it really is. So I use the terms ‘quantification’ and ‘quantified environments’ interchangeably with ‘big data’. ‘Quantified environment’ refers to a specific aspect of digital environments, i.e. quantification, which is made possible through specific technology infrastructures and business and legal arrangements that are both visible and invisible. The use of the phrase ‘quantification’ also indicates a subtle but real shift to the ‘attention economy’, where every single digital action is quantified within an advertising-driven business model. ‘Quantified environments’ is thus also an entry into discussing the social, political, technical and infrastructural aspects of digital ecosystems through specific case studies.

[2] Culbertson, Carolyn (2013). ‘The Ethics of Relationality: Judith Butler and Social Critique’. Continental Philosophy Review 46: 449–463.