I’m in Brussels with a group of fellow PhDs, academics, artists and technologists, at a workshop called Machine Research organised by Constant, Aarhus University’s Participatory IT centre, and Transmediale.
The workshop aims to engage research and artistic practice that takes into account the new materialist conditions implied by nonhuman techno-ecologies including new ontologies of learning and intelligence (such as algorithmic learning), socio-economic organisation (such as blockchain), population management and tracking (such as datafied borders), autonomous or semi-autonomous systems (such as bots or drones) and other post-anthropocentric reconsiderations of agency, materiality and autonomy.
I wanted to develop a subset of my ‘ethnography of ethics’ with a focus on error: what error means, and how it is managed, in the context of driverless car ethics. It’s been great to have this time to think with other people working on related – and very unrelated – topics. It is the small things that count, really; like being able to turn around and ask someone: “what’s the difference between subjection, subjectivity, subjectification, subjectivization?”. The workshop was as much about researching the how of machines as it was about the how of research. I appreciated some encouraging thoughts and questions about what an ‘ethnography’ means as it relates to ethics and driverless cars, as well as a fantastic title for the whole thing (thanks Geoff!!).
Constant’s work involves a lot of curious, cool, interesting publishing and documentation projects, including some of an Oulipo variety. So one of the things they organised for us was etherpads. I use etherpads a lot at work, but for some people this was new. It was good seeing pads in “live editing” mode, rather than just for storage and sharing. We used the pads to annotate everyone’s presentations with comments, suggestions, links, and conversation. They had also made text filters that performed functions like deleting prepositions (the “stop words” filter) or recombining text with Markov chains (the “Markov” filter), which works
“by organizing the words of a source text stream into a dictionary, gathering all possible words that follow each chunk into a list. Then the Markov generator begins recomposing sentences by randomly picking a starting chunk, and choosing a third word that follows this pair. The chain is then shifted one word to the right and another lookup takes place and so on until the document is complete.”
This is the basis of spam filters too.
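To make the quoted description concrete, here is a minimal sketch in Python of a Markov text generator of the kind the filter describes. The function names, the two-word chunk size, and the sample text are my own choices, not Constant’s actual implementation:

```python
import random

def build_chain(words, order=2):
    # Organize the source text into a dictionary, gathering all
    # possible words that follow each chunk into a list.
    chain = {}
    for i in range(len(words) - order):
        chunk = tuple(words[i:i + order])
        chain.setdefault(chunk, []).append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    rng = random.Random(seed)
    chunk = rng.choice(list(chain))  # randomly pick a starting chunk
    out = list(chunk)
    for _ in range(length - len(chunk)):
        followers = chain.get(chunk)
        if not followers:  # dead end: this chunk only ends the source text
            break
        nxt = rng.choice(followers)  # choose a word that follows this pair
        out.append(nxt)
        chunk = chunk[1:] + (nxt,)  # shift the chain one word to the right
    return " ".join(out)

source = "the cat sat on the mat and the cat ran off the mat".split()
chain = build_chain(source, order=2)
print(generate(chain, length=12, seed=1))
```

Because the pair (“the”, “cat”) is followed once by “sat” and once by “ran” in the sample, the generator can wander between the two sentences while every local pair still looks like the source.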
In the course of the workshop people built new filters. Dave Young (who is doing really fascinating research on institutionality and network warfare in the US during the Cold War through the study of its grey literature, like training manuals) made an “Acronymizer”, a filter that searches for frequently used phrases in a text and creates acronyms from them.
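I don’t have Dave’s code, but a filter in the Acronymizer’s spirit might look something like this sketch: count repeated multi-word phrases, then replace each with an acronym built from its initials. All names and thresholds here are my own guesses, not his implementation:

```python
import re
from collections import Counter

def acronymize(text, min_len=2, max_len=4, min_count=2):
    """Replace phrases repeated at least `min_count` times with acronyms."""
    words = re.findall(r"[A-Za-z']+", text)
    # Count every phrase (n-gram) of length min_len..max_len.
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(words) - n + 1):
            counts[tuple(w.lower() for w in words[i:i + n])] += 1
    # Replace the longest, most frequent phrases first.
    phrases = sorted((p for p, c in counts.items() if c >= min_count),
                     key=lambda p: (-len(p), -counts[p]))
    for phrase in phrases:
        acronym = "".join(w[0].upper() for w in phrase)
        pattern = r"\b" + r"\s+".join(map(re.escape, phrase)) + r"\b"
        text = re.sub(pattern, acronym, text, flags=re.IGNORECASE)
    return text

print(acronymize("command and control of command and control systems"))
# → "CAC of CAC systems"
```

Replacing longer phrases before their shorter sub-phrases keeps “command and control” collapsing to CAC rather than leaving a half-acronymized “CAC control”.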
We’ve also just finished creating our workshop “fanzine” using Sarah Garcin’s Publication Jockey, an Atari-punk, handmade publication device built with a Makey Makey and crocodile clips. The fanzine is a template and an experiment for what we will produce at Transmediale. Some people created entirely new works by applying their machine research practices to pieces of their own text. Based on the really great input I got, I rewrote my post as a series of seven scenarios for thinking about how ethics may be produced in various sociotechnical contexts. There’s that nice ‘so much to think about’ feeling! (And so much to do, of course.)