Tag Archives: big data

Four things to hold on to according to Antoinette Rouvroy

A few months ago I was at a workshop in Brussels, on the sidelines of which Antoinette Rouvroy and Seda Gürses were invited to speak. They both said really important things about their work: algorithmic governmentality, and Why Are We Talking About The Cloud Now?, respectively.

I asked Rouvroy what resistance looks like in the face of narratives of big data that appear totalizing (both the narratives and big data itself appear totalizing). What are the things that escape digitalisation? She said that life has a tendency to be recalcitrant to organisation, and named these things:

– Physical things: the fact of bodies and organic life, which are wholly unpredictable;
– Utopias we had, or have, that don’t find a place in any present;
– Dreams of the future;
– If we were really present and complete, we would not talk to each other: we are separated from ourselves through language anyway; trying to find a way back is resistance.

“Algorithmic thinking is tempting because it precludes hesitation, doubt, and failure; failure is a space to hold on to.”

Experience E

How science represents the real world can be cute to the point of being frustrating. In 7th-grade mathematics you get problems like:

“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”

Well, of course there was no such thing then as men taking paternity leave. But you can’t help thinking about the universe of such a problem. What was the work? Were all the men the same? Did they do the work in the same way? Wasn’t one of them better than the rest, and therefore the leader of the pack who got to decide what they would do on their day off?

Here is the definition of machine learning according to one of the pioneers of machine learning, Tom M. Mitchell[1]:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.”

This can be difficult for a social scientist to parse, because you’re curious about the experience E and the experiencer of experience E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who set the standard of performance P? For a scientist, experience E itself is not that important; rather, how E is achieved, sustained and improved on is the important part. How science develops these problem-stories becomes an indicator of its narrativising of the world: a world that needs to be fixed.
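To make Mitchell’s definition concrete, here is a minimal toy sketch, not from Mitchell himself: the `MeanLearner` class, the task and the data stream are all my own illustration. The task T is predicting the next number in a stream, the experience E is the numbers seen so far, and the performance measure P is squared prediction error, which shrinks as E accumulates.

```python
import random

class MeanLearner:
    """Toy learner: task T is predicting the next number in a stream;
    experience E is the numbers seen so far; performance P is squared error."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def predict(self):
        # With no experience yet, guess 0; otherwise guess the running mean.
        return self.total / self.count if self.count else 0.0

    def learn(self, x):
        # Incorporate one more unit of experience E.
        self.total += x
        self.count += 1

random.seed(0)
true_mean = 5.0
learner = MeanLearner()
errors = []
for step in range(1000):
    x = random.gauss(true_mean, 1.0)
    # Performance P: squared error of the prediction before seeing x.
    errors.append((learner.predict() - true_mean) ** 2)
    learner.learn(x)

# Performance at T, measured by P, improves with experience E:
early = sum(errors[:10]) / 10
late = sum(errors[-10:]) / 10
print(early, late)
```

Run as is, the average error over the first ten predictions is far larger than over the last ten: exactly the “improves with experience E” clause of the definition, with none of the questions about who experiences E answered.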

This definition is the beginning of the framing of ethics in an autonomous vehicle. Ethics becomes an engineering problem, to be solved by logical probabilities executed and analysed by machine learning algorithms. (TBC)

[1] http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/

A crisis of ethics in quantified environments

On Friday, October 30th, I presented my new doctoral work to a small group of scholars and engaged political people who came together for an evening event around CIHR’s Fellows’ Day. This post is a summary of some of the ideas discussed there.

*

Every time someone says “but what about the ethics of…”, they are usually referring to a personal architecture of how right and wrong stack up, or of how they think accountability must be pursued; or they merely want to surface the outrageous, the potentially criminal or the harmful. This personal morality is then applied to ethical crises and termed “the ethics of” without necessarily applying any ethical rules to it. It’s a combination of truthiness and a sense of fair play and, if you actually work on info-tech issues, perhaps a little more awareness of the stakes, positions and laws. My doctoral work is about developing a new conceptual framework with which to think about what ethics are in quantified environments [1].

Most of us can identify the crises in quantified environments – breaches, hacks, leaks, privacy violations, the possible future implications of devolving control to autonomous or semi-autonomous vehicles – and these result in moral questions. Everyone has a different moral approach to these things, and yet there is an attempt to appeal to some universal logic of safety, well-being, care and accountability. I argue that this is near impossible. Carolyn Culbertson, reflecting on the development of ethics in the work of Judith Butler, says what I’m trying to say, more eloquently:

“Our beginning-points in ethics are, for the most part, not simply our own. And to the extent that they are, we should want to question how effective these foundations will be in guiding our actions and relationships with others in a world that is even less one’s own. Moral philosophy—and I use that term broadly to mean the way that we think through how best to live our lives—is always in some sense culturally and historically situated. This fact haunts the universalist aspirations of moral philosophy—again, broadly understood—which aims to come up with, if not moral absolutes, at least moral principles that are not merely private idiosyncrasies.” [2]

I argue that human moral reasoning cannot be directly mapped onto resolving ethical crises in quantified and autonomous environments because of the size, number of actors, complexity, dynamism and plastic nature of these environments. How can ethics (by which I mostly mean consequentialist, virtue-ethics and deontological approaches; there are others, but most derive from these) based on individual moral responsibility and virtues manifest in, and be applicable to, distributed networks of human [agency, intention, affect and action], post-human and software/machine actors?

Ethics are expected to be broad, overarching, resilient guidelines based on norms repeatedly practised and universally applied. But attempting to achieve this in quantified environments results in what I’m calling a crisis of ethics. This crisis of ethics is the beginning of a new conceptual and methodological approach for how to think about the place and work of ethics in quantified environments, not an indefensible set of ethical values for quantified environments. I will start fleshing out these crises: of consciousness, of care, of accountability and of uncertainty. There may be others.

Yet the feminist philosophers ask: why these morals and ethics anyway? What makes ethics and moral reasoning derived from patriarchal, Western Judeo-Christian codifications in religion and the law valid? What is the baggage of these approaches, and is it possible to escape the Church and the Father? What are the ethics that develop through affect? Is there an ethics in notions of collectivities, distribution, trust, sharing? I’m waiting to dive into the work of ethicists and philosophers like Sara Ahmed and Judith Butler (for starters) to find out. And, as Alex Galloway might say, the ethics are made by the protocols, not by humans. What then? (Did I say I was going to do this in three years?)

*

More updates as and when. I’m happy to talk or participate in events; and share the details of empirical work after April 2016. This is a part-time PhD at the Institute of Culture and Aesthetics of Media at Leuphana University in Lüneburg, Germany. I continue to work full-time at Tactical Tech.

Notes:

[1] ‘Big data’ is a term in general, widespread use, but its ubiquity also makes it opaque. The word ‘big’ is misleading, for it tends to indicate size or speed; these are features of the phenomenon but reveal nothing about how it came to be either large or fast. ‘Data’ is equally misleading because nearly every technical part of the internet runs on the creation, modification and exchange of data. Nothing about the phrase ‘big data’ tells us what it really is. So I use the terms ‘quantification’ and ‘quantified environments’ interchangeably with ‘big data’. ‘Quantified environment’ refers to a specific aspect of digital environments, i.e. quantification, which is made possible through specific technology infrastructures and business and legal arrangements that are both visible and invisible. The phrase ‘quantification’ also indicates a subtle but real shift to the ‘attention economy’, where every single digital action is quantified within an advertising-driven business model. Thus ‘QE’ is also an entry into discussing the social, political, technical and infrastructural aspects of digital ecosystems through specific case studies.

[2] Culbertson, Carolyn (2013). ‘The Ethics of Relationality: Judith Butler and Social Critique’. Continental Philosophy Review 46: 449–463.