
Machine Research @ Transmediale

The results of the Machine Research workshop from back in October were launched at Transmediale: a zine and a studio talk.

During the workshop, we explored the use of various writing machines and ways in which research has become machine-like. The workshop questioned how research is bound to the reputation economy and the profiteering of publishing companies, who charge large amounts of money to release texts under restrictive conditions. Using Free, Libre, and Open Source collaboration tools, Machine Research participants experimented with collective notetaking, transforming their contributions through machine authoring scripts and a publishing tool developed by Sarah Garcin. (The image accompanying this post is a shot of the PJ, or Publication Jockey, with some text it laid out on a screen in the back.) The print publication, or zine, launched at Transmediale, is one result of this process. You can read the zine online.

The studio talk brought together one half of our research group, the half that talked about ‘infrastructures’. Listen to it here: (I’m speaking at 44:09)

New: Privacy, Visibility, Anonymity: Dilemmas in Tech Use by Marginalised Communities

I started this Tactical Tech project two years ago and am thrilled to see it finally out. Research takes time! This is a synthesis report of two case studies we did in Kenya and South Africa on the risks and barriers faced by marginalised communities in using technology (primarily in transparency and accountability work). You can download the report on the Open Docs IDS website here.

Experience E

How science represents the real world can be cute to the point of frustration. In 7th-grade mathematics you have problems like:

“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”

Well, of course there was no such thing back then as men taking paternity leave. But you can’t help but think about the universe of such a problem. What was the work? Were all the men the same? Did they do the work in the same way? Wasn’t one of them better than the rest, and therefore the leader of the pack who got to decide what they would do on their day off?
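(For what it’s worth, the stock version of the problem, minus the paternity-leave twist, only works because it assumes the men are perfectly interchangeable and work at a constant rate. Under that assumption the arithmetic is:

$$6 \text{ men} \times 19 \text{ days} = 114 \text{ man-days}, \qquad \frac{114 \text{ man-days}}{4 \text{ men}} = 28.5 \text{ days}.$$

Every question above is a poke at exactly that assumption.)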

Here is the definition of machine learning according to one of its pioneers, Tom M. Mitchell [1]:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”

This can be difficult for a social scientist to parse because you’re curious about the experience E and the experiencer of experience E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who set the standard of performance P? For a scientist, experience E itself is not that important; rather, how E is achieved, sustained and improved on is the important part. How science develops these problem-stories becomes an indicator of its narrativising of the world; a world that needs to be fixed.
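To make the triad concrete, here is a toy sketch of the definition in action (my own illustration, not Mitchell’s; the task and learner are invented): the task T is classifying numbers as ‘low’ or ‘high’, the experience E is a growing pile of labelled examples, and the performance P is accuracy on a fixed test set.

    # A toy illustration of Mitchell's T/P/E definition (mine, not his).
    # Task T: classify a number between 0 and 100 as 'low' or 'high'.
    # Experience E: a growing list of labelled examples.
    # Performance P: accuracy on a fixed test set.
    import random

    random.seed(0)

    def true_label(x):  # the ground truth the learner must recover
        return "high" if x >= 50 else "low"

    test_set = [(x, true_label(x)) for x in range(0, 100, 7)]

    def train(experience):
        """Learn a threshold: the midpoint between the two class averages."""
        lows = [x for x, y in experience if y == "low"]
        highs = [x for x, y in experience if y == "high"]
        if not lows or not highs:
            return 0.0  # naive guess until experience covers both classes
        return (sum(lows) / len(lows) + sum(highs) / len(highs)) / 2

    def performance(threshold):
        """P: the fraction of the test set classified correctly."""
        hits = sum(("high" if x >= threshold else "low") == y
                   for x, y in test_set)
        return hits / len(test_set)

    experience = []
    for n in (2, 10, 100, 1000):  # more experience E ...
        while len(experience) < n:
            x = random.uniform(0, 100)
            experience.append((x, true_label(x)))
        print(n, round(performance(train(experience)), 2))  # ... better P at T

Notice what the sketch makes plain: everything the social scientist wants to ask about E has, by this definition, already been flattened into a list of (value, label) pairs.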

This definition is the beginning of the framing of ethics in an autonomous vehicle. Ethics becomes an engineering problem to be solved by logical probabilities executed and analysed by machine learning algorithms. (TBC)

[1] http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/?utm_source=medium&utm_medium=post&utm_content=chapterlink&utm_campaign=republish

The algorithms of ethics. (And puppycats)

I’m trying to remember when I first heard the phrase ‘the ethics of algorithms’ (TEoA) and why it bothers me. It sounded like the triumph of a branding exercise. TEoA has dogged me; it has been the intellectual equivalent of an adorable little puppy that snaps at your ankles in encouragement to play, and then opens its eyes wide to melt you with love and fake neediness, saying take me home please, Mommy. (Maybe I’m referring to a cat and not a dog; perhaps only cats and toddlers are capable of such machinations? A puppy-cat!) The ‘ethics of algorithms’ rolls off the tongue nicely, sounds important and meaningful, and captures the degrees of concern and outrage we feel about the powerful role that computer algorithms have in society, and will continue to have.

I recently started a DPhil (a kind of PhD) in big data and ethics (longer version here), so I’m somewhat invested in the phrase TEoA, because people often ask whether that is what my work is about. It isn’t. However, there are people working on the ethics of algorithms, and the good people at CIHR recently published a paper on it, which I think you should read right after you finish reading my post, because the paper is a good description of the way algorithms work in our quantified society. I’m not referring to any of these things in my work, however. What I’m working on is the algorithms of ethics. By this I mean that I’m going to think about the ethics first and understand how they work, where they come from, and what that ethics [can] mean in the context of big data.

The reason why I think TEoA is a cute but needy puppycat, as described above: it is too atomising, deterministic even, and an outcome rather than a starting point. Why would you start with the algorithm, which is the outcome of long chains of technical, scientific, legal and economic events, and not with a point earlier on in the process of its development? I think the focus on the outcome, the algorithm, is also indicative of how we think of ethics as outcomes, rather than as a series of processes and negotiations. Or the fact that we think about ethics as outcomes has led to a focus on algorithms. Both, possibly.

I don’t think algorithms have ethics; people have ethics. Algorithms govern, perhaps; they make decisions; but I don’t think they have or make ethics. Of course saying ‘the ethics of algorithms’ isn’t to be taken literally; it’s the assertion that algorithms that make decisions have been programmed (to learn) how to do so because of humans (who have the capacity for ethical reasoning). However, the phrase is misleading because it makes it seem like algorithms are in fact making ethical decisions. At the same time, an algorithm cannot function without making some kind of judgment (not a moral judgment, though algorithms can do things that have moral implications); it wouldn’t be able to proceed to the next step otherwise. But does this amount to ethics? I suppose it depends on what ethics you’re subscribing to, but I’d say no.

‘Ethics of algorithms’ could also refer to the ethical features or properties of algorithms, not the ethics that algorithms are assumed to produce. Kraemer, van Overveld and Peterson have a paper on it here, based on medical imaging analysis. This work suggests that algorithms have value judgments baked into them and their functioning, but concludes that ethics is the domain of systems design(ers) and that users should have more control over the outcomes of algorithmic functioning.
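A hypothetical sketch of what ‘baked in’ means (my illustration, not an example from their paper): the single constant below decides how an imaging system trades false alarms against missed tumours, which is an ethical choice disguised as an engineering one.

    # Hypothetical illustration: the constant below is a value judgment.
    THRESHOLD = 0.3  # flag a scan for human review above this model score

    def triage(tumour_score):
        # Lowering THRESHOLD trades false alarms (anxiety, cost,
        # overtreatment) against missed tumours; raising it trades the
        # other way. Neither setting is ethically neutral, and the choice
        # is invisible to anyone who only sees the system's outputs.
        return "refer to radiologist" if tumour_score > THRESHOLD else "no action"

    print(triage(0.42))  # -> refer to radiologist

The judgment is made once, by a designer, and then executed indefinitely by code, which is precisely why they locate the ethics in systems design rather than in the algorithm itself.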

My work begins with the hypothesis that principles based on classical ethics (like the oft-quoted Trolley Problem in the context of autonomous vehicles, something that I believe was developed so that journalists could write their stories) are not really appropriate to big data environments (I refer to this as a crisis of ethics), and sets out to come up with alternate approaches to thinking about ethics. Along the way I hope to develop methods to study ethics in quantified environments, not just come up with “the answer”. (Thankfully, this is a humanities PhD, so there is no “right answer”.) I’m also pretty sure I will have many new puppycats snapping at my ankles, excited to play.

Post script.
I have also discovered that there is a strange hybrid creature called PuppyCat. Here is a weird animated video with puppycats.

(Bee and PuppyCat image from anyimage.info)

A crisis of ethics in quantified environments

On Friday, October 30th, I presented my new doctoral work to a small group of scholars and engaged political people who came together for an evening event around CIHR’s Fellows’ Day. This post is a summary of some of the ideas discussed there.

*

Every time someone says “but what about the ethics of…”, they’re often referring to a personal architecture of how right and wrong stack up, or of how they think accountability must be pursued; or merely surfacing the outrageous, or the potentially criminal or harmful. Then this personal morality is applied to ethical crises and termed “the ethics of” without necessarily applying any ethical rules to it. It’s a combination of truthiness and a sense of fair play, and, if you actually work on info-tech issues, perhaps a little more awareness of the stakes, positions and laws. My doctoral work is about developing a new conceptual framework with which to think about what ethics are in quantified environments [1].

Most of us can identify the crises in quantified environments – breaches, hacks, leaks, privacy violations, the possible future implications of devolving control to autonomous or semi-autonomous vehicles – and these result in moral questions. And everyone has a different moral approach to these things, and yet there is an attempt to appeal to some universal logic of safety, well-being, care and accountability. I argue that this is near impossible. Carolyn Culbertson, reflecting on the development of ethics in the work of Judith Butler, says what I’m trying to say, more eloquently:

“Our beginning-points in ethics are, for the most part, not simply our own. And to the extent that they are, we should want to question how effective these foundations will be in guiding our actions and relationships with others in a world that is even less one’s own. Moral philosophy—and I use that term broadly to mean the way that we think through how best to live our lives—is always in some sense culturally and historically situated. This fact haunts the universalist aspirations of moral philosophy—again, broadly understood—which aims to come up with, if not moral absolutes, at least moral principles that are not merely private idiosyncrasies.” [2]

I argue that human moral reasoning cannot be directly mapped onto resolving ethical crises in quantified and autonomous environments because of the size, number of actors, complexity, dynamism and plastic nature of these environments. How can ethics (by which I mostly mean consequentialist, virtue-ethics and deontological approaches; although there are others, most derive from these) based on individual moral responsibility and virtues manifest in, and be applicable to, distributed networks of the human [agency, intention, affect and action], the post-human and the software/machine?

Ethics are expected to be broad, overarching, resilient guidelines based on norms repeatedly practised and universally applied. But attempting to achieve this in quantified environments results in what I’m referring to as a crisis of ethics. This crisis of ethics is the beginning of a new conceptual and methodological approach for how to think about the place and work of ethics in quantified environments, not an indefensible set of ethical values for quantified environments. I will start fleshing out these crises: of consciousness, of care, of accountability and of uncertainty. There may be others.

Yet the feminist philosophers ask: why these morals and ethics anyway? What makes ethics and moral reasoning from patriarchal, Western Judeo-Christian codifications in religion and the law valid? What is the baggage of these approaches, and is it possible to escape the Church and the Father? What are the ethics that develop through affect? Is there an ethics in notions of collectivities, distribution, trust, sharing? I’m waiting to dive into the work of ethicists and philosophers like Sara Ahmed and Judith Butler (for starters) to find out. And, as Alex Galloway might say, the ethics are made by the protocols, not by humans. What then? (Did I say I was going to do this in three years?)

*

More updates as and when. I’m happy to talk or participate in events; and share the details of empirical work after April 2016. This is a part-time PhD at the Institute of Culture and Aesthetics of Media at Leuphana University in Lüneburg, Germany. I continue to work full-time at Tactical Tech.

Notes:

[1] ‘Big data’ is a term that has general, widespread use and familiarity; however, its ubiquity also makes it opaque. The word ‘big’ is misleading, for it tends to indicate size or speed, which are features of this phenomenon but do not reveal anything about how it came to be either large or fast. ‘Data’ is equally misleading because nearly every technical part of the internet runs on the creation, modification and exchange of data. There is nothing about the phrase ‘big data’ that tells us what it really is. So I use the terms ‘quantification’ and ‘quantified environments’ interchangeably with ‘big data’. ‘Quantified environment’ refers to a specific aspect of digital environments, i.e. quantification, which is made possible through specific technology infrastructures and business and legal arrangements that are both visible and invisible. The use of the phrase ‘quantification’ also indicates a subtle but real shift to the ‘attention economy’, where every single digital action is quantified within an advertising-driven business model. ‘QE’ is thus also an entry into discussing the social, political, technical and infrastructural aspects of digital ecosystems through specific case studies.

[2] Culbertson, Carolyn (2013). The ethics of relationality: Judith Butler and social critique. Continental Philosophy Review 46: 449–463.

Word of the Week: Heteromation

In the past month I’ve read two reviews of Nicholas Carr’s The Glass Cage, which is, loosely, about how automation is de-skilling and de-humanising us. The first is a review by Evgeny Morozov, an unusual piece of writing, mostly for the reflective tone he takes. Morozov’s review is actually about the place of technology criticism and a call to politics. It’s a thoughtful piece and I enjoyed it.

There’s a particular kind of article about tech that sets my teeth on edge: the kind that sounds the death-knell for X or Y thing, proclaims the end/beginning; the eschatological kind. And isn’t it odd how eschatology and scatology sound similar? Actually it may not be odd at all; eschatology derives from the Greek ἔσχατος (‘last’), which “seems to be derived from ἐξ (eks, out). Compare ἔγκατα (énkata, intestines)” (from here). This is what the other review, by Sue Halpern, does in taking the ‘robots and algorithms [are] taking over’ line.

There’s an assumption built into this: that of our separation from machines, code, algorithms, circuitry, hardware. Us and them. The narrative of machines taking over, de-skilling and ejecting humans from their jobs presents a scenario that is too black-and-white for me; it doesn’t look at the ways in which human labour is exploited as we produce a commodity called data, resulting in the eventual accumulation of wealth in the hands of a few companies and individuals. It ignores the reality that we are already a little hybrid, already a little cyborg, and closely connected to our machines and algorithms. While Halpern does look at American unemployment rates over decades and is fairly measured, she ends with “We, the people, are on our own here—though if the AI developers have their way, not for long.”

So, the word of the week is heteromation, which is about those unfashionable things like labour, politics and social criticism, which discussions about tech sometimes forget are there, as Morozov laments.

You’d want to steer clear of most things prefixed by ‘hetero’, but heteromation, a new-ish concept, I think, is not one of them. Proposed by Hamid Ekbia and Bonnie Nardi (2014), heteromation is about how automation does not necessarily erase labour or jobs but rather displaces them elsewhere.

The perspective of heteromation examines the social dynamics, forces and power relations that underlie how labour is divided between machines and humans, and why we come to believe that humans can do certain kinds of digital labour, and machines others. If automation was the first phase, with the machine taking centre-stage, and augmentation another, where the machine ‘comes to the rescue’, then heteromation is a third phase, where ‘the machine calls for help’.

Through case studies of Mechanical Turk, citizen science projects like Foldit, and video games, Ekbia and Nardi describe what heteromation is and how it works. In the gaming industry, players sit on tribunals to police other players’ behaviours and respond to complaints, because responding to every email about players’ toxic behaviour is just not possible. Mechanical Turk, a product from Amazon, is a clearinghouse of humans tagging images, transcribing audio snippets and doing other such ‘data janitorial’ tasks for online services. Jeff Bezos’ infamous line describing Mechanical Turk was ‘you’ve heard of software-as-a-service; this is humans-as-a-service.’ On low-paid, temporary and short-term contracts, Mechanical Turkers and other data janitors work in incredibly precarious conditions that would seem appalling if they were in a factory:

“Employers, therefore, must consider employees as functionaries in ‘an algorithmic system,’ forcing the labor relation even further along the path of ruthless objectification than Ford or Taylor could have imagined. Those humans rendered as bits of algorithmic function disappear into relations with oblivious employers ‘on autopilot.’ Workers are largely ‘invisible.’”
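Part of what makes ‘humans-as-a-service’ so literal is that requesting human labour looks exactly like any other API call. Here is a hedged sketch using boto3, Amazon’s Python SDK (the task, URL and parameter values are invented for illustration):

    import boto3

    # Hypothetical sketch: human labour requested the way one requests
    # compute or storage. All values below are invented for illustration.
    mturk = boto3.client("mturk", region_name="us-east-1")

    question_xml = """<ExternalQuestion
      xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.com/tag-task</ExternalURL>
      <FrameHeight>400</FrameHeight>
    </ExternalQuestion>"""

    response = mturk.create_hit(
        Title="Tag the objects in an image",
        Description="Look at one image and list the objects you see.",
        Reward="0.05",                    # five US cents per completed task
        MaxAssignments=1,
        LifetimeInSeconds=3600,           # the task is available for one hour
        AssignmentDurationInSeconds=300,  # the worker gets five minutes
        Question=question_xml,
    )
    print(response["HIT"]["HITId"])       # a person's work, returned as an ID

That someone’s afternoon of labour comes back as a HIT ID is, I think, exactly the ‘ruthless objectification’ the quote above describes.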

Another recent, excellent read on digital labour is from The New Inquiry, which goes into some of the history and present of it. Here are some lines from that piece, called The Ladies Vanish (which starts with a killer story about a Google employee who was fired for asking questions about the women who scan in the books for Google Books):

“..almost 70% of mechanical turkers were women. How shocking: the low prestige, invisible, poorly paid jobs on the internet are filled by women. Women provide the behind the scenes labor that is mystified as the work of computers, unglamorous work transformed into apparent algorithmic perfection…The computer itself is a feminized item. The history of the computer is the history of unappreciated female labor hidden behind “technology,” a screen (a literal screen) erected by boy geniuses.”

There is still a place for politics and social criticism, Mr. Morozov; I guess it’s just not the Nicholas Carrs who are doing it.