Category Archives: tech

The algorithms of ethics. (And puppycats)

I’m trying to remember when I first heard the phrase ‘the ethics of algorithms’ (TEoA) and why it bothers me. It sounded like the triumphant product of a branding exercise. TEoA has dogged me; it has been the intellectual equivalent of an adorable little puppy that nips at your ankles to get you to play, then opens its eyes wide to melt you with love and fake neediness, saying take me home please, Mommy. (Maybe I’m referring to a cat and not a dog; perhaps only cats and toddlers are capable of such machinations. A puppy-cat!) The ‘ethics of algorithms’ rolls off the tongue nicely, sounds important and meaningful, and captures the degrees of concern and outrage we feel about the powerful role that computer algorithms have in society, and will continue to have.

I recently started a DPhil (a kind of PhD) in big data and ethics (longer version here), so I’m somewhat invested in the phrase TEoA, because that is what people often assume my work is about. It isn’t. There are, however, people working on the ethics of algorithms, and the good people at CIHR recently published a paper on it, which I think you should read right after you finish reading my post, because it is a good description of the way algorithms work in our quantified society. None of that is what my work is about, though. What I’m working on is the algorithms of ethics. By this I mean that I’m going to think about the ethics first: understand how they work, where they come from, and what ethics [can] mean in the context of big data.

Here is why I think TEoA is the cute but needy puppycat described above: it is too atomising, deterministic even, and an outcome rather than a starting point. Why would you start with the algorithm, which is the outcome of long chains of technical, scientific, legal and economic events, and not with a point earlier in the process of its development? I think the focus on the outcome, the algorithm, is also indicative of how we think of ethics as outcomes rather than as a series of processes and negotiations. Or perhaps the fact that we think about ethics as outcomes has led to the focus on algorithms. Both, possibly.

I don’t think algorithms have ethics; people have ethics. Algorithms govern, perhaps; they make decisions; but I don’t think they have or make ethics. Of course, ‘the ethics of algorithms’ isn’t to be taken literally; it is shorthand for the fact that algorithms that make decisions have been programmed (to learn) how to do so by humans (who have the capacity for ethical reasoning). The phrase is misleading, though, because it makes it sound as if algorithms are in fact making ethical decisions. At the same time, an algorithm cannot function without making some kind of judgment (not a moral judgment, though algorithms can do things that have moral implications); without one, it could not proceed to the next step. But does this amount to ethics? I suppose it depends on what ethics you subscribe to, but I’d say no.

‘Ethics of algorithms’ could also refer to the ethical features or properties of algorithms, rather than the ethics that algorithms are assumed to produce. Kraemer, van Overveld and Peterson have a paper on this here, based on medical imaging analysis. This work suggests that algorithms have value judgments baked into their functioning, but concludes that ethics is the domain of system design(ers) and that users should have more control over the outcomes of algorithmic functioning.
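
To make ‘baked in’ concrete, here is a minimal sketch, with invented data and numbers (not from the paper), of the kind of value judgment Kraemer et al. discuss: wherever a detection algorithm needs a cutoff, the choice of that cutoff silently trades one kind of error against another.

```python
# Hypothetical screening data: (measured signal, patient actually ill?)
cases = [(0.2, False), (0.4, False), (0.45, True),
         (0.6, True), (0.7, False), (0.9, True)]

def evaluate(threshold):
    """Count the errors a given cutoff produces on the toy data."""
    false_pos = sum(1 for s, ill in cases if s >= threshold and not ill)
    false_neg = sum(1 for s, ill in cases if s < threshold and ill)
    return false_pos, false_neg

for t in (0.3, 0.5, 0.8):
    fp, fn = evaluate(t)
    print(f"threshold {t}: {fp} healthy flagged, {fn} illnesses missed")
```

A low threshold alarms healthy people; a high one misses disease. Which error matters more is an ethical question, yet it ships inside the algorithm as a single number chosen by a designer.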

My work begins with the hypothesis that principles based on classical ethics (like the oft-quoted Trolley Problem in the context of autonomous vehicles, something I believe was developed so that journalists could write their stories) are not really appropriate to big data environments (I refer to this as a crisis of ethics), and aims to come up with alternative approaches to thinking about ethics. Along the way I hope to develop methods to study ethics in quantified environments, not just come up with “the answer”. (Thankfully, this is a humanities PhD, so there is no “right answer”.) I’m also pretty sure I will have many new puppycats snapping at my ankles, excited to play.

Post script.
I have also discovered that there is a strange hybrid creature called PuppyCat. Here is a weird animated video with puppycats.

(image: Bee and PuppyCat, from anyimage.info)

A crisis of ethics in quantified environments

On Friday, October 30th, I presented my new doctoral work to a small group of scholars and politically engaged people who came together for an evening event around CIHR’s Fellows’ Day. This post is a summary of some of the ideas discussed there.

*

Every time someone says “but what about the ethics of…”, they are often referring to a personal architecture of how right and wrong stack up, or of how they think accountability must be pursued; or they mean merely to surface the outrageous, or the potentially criminal or harmful. This personal morality is then applied to ethical crises and termed “the ethics of” without any ethical rules necessarily being applied. It is a combination of truthiness and a sense of fair play and, if you actually work on info-tech issues, perhaps a little more awareness of the stakes, positions and laws. My doctoral work is about developing a new conceptual framework with which to think about what ethics are in quantified environments [1].

Most of us can identify the crises in quantified environments – breaches, hacks, leaks, privacy violations, the possible future implications of devolving control to autonomous or semi-autonomous vehicles – and these raise moral questions. Everyone has a different moral approach to these things, and yet there is an attempt to appeal to some universal logic of safety, well-being, care and accountability. I argue that this is near impossible. Carolyn Culbertson, reflecting on the development of ethics in the work of Judith Butler, says what I’m trying to say, more eloquently:

“Our beginning-points in ethics are, for the most part, not simply our own. And to the extent that they are, we should want to question how effective these foundations will be in guiding our actions and relationships with others in a world that is even less one’s own. Moral philosophy—and I use that term broadly to mean the way that we think through how best to live our lives—is always in some sense culturally and historically situated. This fact haunts the universalist aspirations of moral philosophy—again, broadly understood—which aims to come up with, if not moral absolutes, at least moral principles that are not merely private idiosyncrasies.” [2]

I argue that human moral reasoning cannot be directly mapped onto resolving ethical crises in quantified and autonomous environments because of the size, number of actors, complexity, dynamism and plastic nature of these environments. How can ethics (by which I mostly mean consequentialist, virtue-ethics and deontological approaches; there are others, but most derive from these) based on individual moral responsibility and virtues manifest in, and be applicable to, distributed networks of the human [agency, intention, affect and action], the post-human and the software/machine?

Ethics are expected to be broad, overarching, resilient guidelines based on norms repeatedly practiced and universally applied. But attempting to achieve this in quantified environments results in what I’m referring to as a crisis of ethics. This crisis of ethics is the beginning of a new conceptual and methodological approach to thinking about the place and work of ethics in quantified environments, not an indefensible set of ethical values for them. I will start by fleshing out these crises: of consciousness, of care, of accountability and of uncertainty. There may be others.

Yet the feminist philosophers ask: why these morals and ethics anyway? What makes ethics and moral reasoning derived from patriarchal, Western Judeo-Christian codifications in religion and the law valid? What is the baggage of these approaches, and is it possible to escape the Church and the Father? What are the ethics that develop through affect? Is there an ethics in notions of collectivity, distribution, trust, sharing? I’m waiting to dive into the work of ethicists and philosophers like Sara Ahmed and Judith Butler (for starters) to find out. And, as Alex Galloway might say, the ethics are made by the protocols, not by humans. What then? (Did I say I was going to do this in three years?)

*

More updates as and when. I’m happy to talk or participate in events; and share the details of empirical work after April 2016. This is a part-time PhD at the Institute of Culture and Aesthetics of Media at Leuphana University in Lüneburg, Germany. I continue to work full-time at Tactical Tech.

Notes:

[1] ‘Big data’ is a term in general, widespread use, but its very ubiquity makes it opaque. The word ‘big’ is misleading, for it tends to indicate size or speed, which are features of the phenomenon but reveal nothing about how it came to be either large or fast. ‘Data’ is equally misleading, because nearly every technical part of the internet runs on the creation, modification and exchange of data. Nothing about the phrase ‘big data’ tells us what it really is. So I use the terms ‘quantification’ and ‘quantified environments’ interchangeably with ‘big data’. ‘Quantified environment’ refers to a specific aspect of digital environments, i.e. quantification, which is made possible through specific technology infrastructures and business and legal arrangements that are both visible and invisible. The phrase ‘quantification’ also indicates a subtle but real shift to the ‘attention economy’, in which every single digital action is quantified within an advertising-driven business model. ‘Quantified environments’ is thus also an entry into discussing the social, political, technical and infrastructural aspects of digital ecosystems through specific case studies.

[2] Culbertson, Carolyn (2013). ‘The ethics of relationality: Judith Butler and social critique.’ Continental Philosophy Review 46: 449–463.

Demon-AI


a group of more than one hundred Silicon Valley luminaries, led by Tesla’s Elon Musk, and scientists, including the theoretical physicist Stephen Hawking, issued a call to conscience for those working on automation’s holy grail, artificial intelligence, lest they, in Musk’s words, “summon the demon.”

I know it’s only a figure of speech, but it struck an odd note that scientists like Hawking and entrepreneurs like Musk likened the development of artificial intelligence to something demonic.

From here. The full interview in which Musk says this was here, but it has since been taken down because of MIT’s copyright claims.

Also, the movie Demon Seed is ace.

Word of the Week: Heteromation

In the past month I’ve read two reviews of Nicholas Carr’s The Glass Cage, which is, loosely, about how automation is de-skilling and de-humanising us. The first is a review by Evgeny Morozov, an unusual piece of writing, mostly for the reflective tone he takes. Morozov’s review is actually about the place of technology criticism, and a call to politics. It’s a thoughtful piece and I enjoyed it.

There’s a particular kind of article about tech that sets my teeth on edge: the kind that sounds the death-knell for X or Y thing, proclaims the end/beginning of something; the eschatological kind. And isn’t it odd how eschatology and scatology sound similar? Perhaps it isn’t odd at all; their roots may be related, eschatology deriving from the Greek for ‘out’ and intestines: “seems to be derived from ἐξ (eks, out). Compare ἔγκατα (énkata, intestines)” (from here). This is what the other review, by Sue Halpern, does in taking the ‘robots and algorithms [are] taking over’ line.

There’s an assumption built into this: that we are separate from machines, code, algorithms, circuitry, hardware. Us and them. The narrative of machines taking over, de-skilling us and ejecting humans from their jobs is too black-and-white for me; it doesn’t look at the ways in which human labour is exploited as we produce a commodity called data, the wealth from which eventually accumulates in the hands of a few companies and individuals. It ignores the reality that we are already a little hybrid, already a little cyborg, closely connected to our machines and algorithms. While Halpern does look at American unemployment rates over the decades and is fairly measured, she ends with “We, the people, are on our own here—though if the AI developers have their way, not for long.”

So, the word of the week is heteromation, which is about those unfashionable things like labour, politics and social criticism, which discussions about tech sometimes forget are there, as Morozov laments.

You’d want to steer clear of most things prefixed by ‘hetero’, but heteromation, a new-ish concept, I think, is not one of them. Proposed by Hamid Ekbia and Bonnie Nardi (2014), heteromation is about how automation does not necessarily erase labour or jobs but rather displaces them elsewhere.

The perspective of heteromation examines the social dynamics, forces and power relations that underlie how labour is divided between machines and humans, and why we come to believe that humans should do certain kinds of digital labour and machines others. If automation was the first phase, with the machine taking centre-stage, and augmentation a second, in which the machine ‘comes to the rescue’, then heteromation is a third phase, in which ‘the machine calls for help’.
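
As a toy illustration of ‘the machine calls for help’ (a sketch of my own, with invented names and numbers, not from Ekbia and Nardi), consider a labelling pipeline that automates the confident cases and quietly routes the rest to human micro-workers:

```python
# Heteromation in miniature: the machine decides when it can,
# and calls for (cheap, invisible) human help when it can't.

def machine_label(image):
    """Stand-in for a trained classifier: returns (label, confidence)."""
    return "cat", 0.62  # pretend the model is unsure about this image

def ask_human_worker(image):
    """Stand-in for posting a micro-task to a crowd platform
    (Mechanical Turk-style) and waiting for a worker's answer."""
    return "dog"  # a person earns a few cents producing this label

def label(image, min_confidence=0.9):
    guess, confidence = machine_label(image)
    if confidence >= min_confidence:
        return guess                 # automation: the machine decides
    return ask_human_worker(image)   # heteromation: the machine calls for help

print(label("photo_001.jpg"))  # confidence too low, so a human answers: dog
```

From the outside this looks like ‘AI labelled the image’; the human in the loop stays invisible, which is exactly the point.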

Through case studies of Mechanical Turk, citizen science projects like Foldit, and video games, Ekbia and Nardi describe what heteromation is and how it works. In the gaming industry, players sit on tribunals to police other players’ behaviour and respond to complaints, because responding to every email about players’ toxic behaviour is just not possible. Mechanical Turk, a product from Amazon, is a clearinghouse of humans tagging images, transcribing audio snippets and doing other such ‘data janitorial’ tasks for online services. Jeff Bezos’ infamous line describing Mechanical Turk was ‘you’ve heard of software-as-a-service; this is humans-as-a-service.’ On low-paid, temporary and short-term contracts, Mechanical Turkers and other data janitors work in incredibly precarious conditions that would seem appalling if they were in a factory:

“Employers, therefore, must consider employees as functionaries in “an algorithmic system,” forcing the labor relation even further along the path of ruthless objectification than Ford or Taylor could have imagined. Those humans rendered as bits of algorithmic function disappear into relations with oblivious employers “on autopilot.” Workers are largely “invisible,””

Another recent, excellent read on digital labour, from The New Inquiry, goes into some of its history and present. Here are some lines from that piece, The Ladies Vanish (which starts with a killer story about a Google employee who was fired for asking questions about the women who scan in the books for Google Books):

“…almost 70% of mechanical turkers were women. How shocking: the low prestige, invisible, poorly paid jobs on the internet are filled by women. Women provide the behind the scenes labor that is mystified as the work of computers, unglamorous work transformed into apparent algorithmic perfection… The computer itself is a feminized item. The history of the computer is the history of unappreciated female labor hidden behind “technology,” a screen (a literal screen) erected by boy geniuses.”

There is still a place for politics and social criticism, Mr. Morozov; I guess it’s just not the Nicholas Carrs who are doing it.


Biting off the big data beast

So it has happened: I’m registering for a PhD. I’ve bitten off the big data beast and decided to focus on ethics and big data. I’m going to turn this blog into a place to start documenting some of my writing along the way. I haven’t formally registered for the degree yet; when I do, I’m sure I’ll post information about it. This post summarises some of the reading I did in exploring ethics and technology.

Ethical Apps

An interest in big data was always on the sidelines of my work at Tactical Tech. It was sometime in early 2014 that I came across ethical apps (there isn’t actually a Wikipedia page about ethical apps, but here’s a list of them) and was horrified and amused to discover that apps giving users shopping and ‘sustainable consumption’ advice are labeled ‘ethical’. I was instantly intrigued and decided to look into them. (Ethical apps are not the site of my research, though they present an interesting case for looking at how a mainstream discourse about ethics is being shaped.)

‘Ethical apps’ use publicly available data to present evidence of how consumer choices contribute to the destruction of natural resources and the environment. The rationale goes like this: if individuals are given information about the political and material implications of their choices, they will be inspired to make different consumer choices. Ethical apps work either to help users avoid certain choices and make different ones; to get them to rethink their need to consume, substituting it with actions such as swapping and up-cycling; or to contribute donations directly to charities every time a particular browser plugin or app is used, whether or not the customer actually makes an ‘ethical choice’. There is scant qualitative research into the efficacy of these apps. They have, however, been neatly criticised.
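
The mechanics are simple enough to caricature in a few lines. Here is a hedged sketch with an invented dataset and scoring rule (real apps differ in their data sources and methods):

```python
# The "ethical app" rationale in miniature: public data in,
# a verdict on the consumer choice out. All figures are invented.

IMPACT_DATA = {  # hypothetical product -> environmental impact score (0-10)
    "bottled water": 8.5,
    "tap water": 0.3,
    "beef burger": 9.1,
    "lentil soup": 1.2,
}

def advise(product, threshold=5.0):
    """Turn data into the nudge these apps promise to deliver."""
    score = IMPACT_DATA.get(product)
    if score is None:
        return f"No data on {product!r}."
    if score > threshold:
        return f"{product}: impact {score}/10 - consider an alternative."
    return f"{product}: impact {score}/10 - carry on."

print(advise("bottled water"))
```

Note that the threshold at which the app starts moralising is itself a value judgment made by a designer; the data alone does not supply it.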

Interestingly, a Google search reveals that this is not what ‘data-driven ethics’ means, though it could be. In the context of ethical apps, ‘data-driven ethics’ could mean that data about something produces an awareness of its ethical implications. Instead, ‘data-driven ethics’ refers to the ethical issues raised by the implications of big data for privacy and the law, journalism, research, etc. What’s worth looking at here is that big data determines the approach to ethics; what has resonance for me is that what is actually the object, big data, becomes the subject. Said more provocatively: big data studies you, instead of you studying big data. (The idea that big data, comprised as it is of billions of users’ subjectivities, is not a specific event or object but is itself a sort of subjectivity, is another thing, but I’m not going there.)

Ethical apps show that, at one level, access to information is seen as a basis for positive or progressive action. The rationale goes like this: if you know about (i.e. have information about) <insert issue> and still do not act on that information to do something different, does that mean you don’t care about <insert issue>? (The construction of ‘care’ for the environment and the emotional manipulation-by-disaster-scenario that the climate change movement has deployed is another post for another time.) The co-optation of ‘ethics’ here, then, is that access to and use of information to do the ‘right’ thing is also the ethical thing to do.

Ethical apps also intrigued me because of the different ways in which the quantified self takes shape; in this case, a quantified self that isn’t about tracking physiological states or fitness levels, but about morality, reasoning and choice-making. Imagine a version of the self that relies on feedback loops and public data to make reasoned choices. For me this quickly slid into a conversation about AI, or at least something capable of more complex functions than the narrow AI we live with. But more on that later.

Machine fantasies

Some of the other stuff I came across in reading up about ethical apps was about the connection between the climate change movement and the use of information to understand and manage it. While I was reading about this, a colleague serendipitously posted a link to Adam Curtis’ excellent documentary series All Watched Over by Machines of Loving Grace (three episodes of about an hour each; watch them here). Some of what follows is based on ideas in the second documentary in that series, ‘The Use and Abuse of Vegetational Concepts’, which provides a fascinating insight into where some of the discursive ideas girding the climate change movement have come from.


There is a ‘machine fantasy’ of nature that scholars and researchers came to believe: that natural environments are comprised of self-regulating feedback loops; that by communicating with all parts of itself, nature will arrive at an ‘understanding’ of its state and, based on that, revert to balance; that nature is, in effect, a machine.

This is an early idea in the trajectory of applying machine logic to suppositions about how nature functions, and in the history of ecology studies; it was promoted by Arthur Tansley and others in the interwar years. Cyberneticians and systems theorists from MIT took this further in the 1970s. They were instrumental in arriving at ‘scientific’ evidence of impending ecological disaster by applying systems dynamics theories and computer modeling to hundreds of complex variables about the environment assembled within a program. The ‘nature as a system of stability and balance’ discourse was thus generated by cybernetics theorists.
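
The flavour of that modelling is easy to convey with a toy. Here is a deliberately crude sketch of my own, with invented parameters (the real models had hundreds of variables), of a system-dynamics feedback loop in which a growing population draws down a finite resource:

```python
# A toy feedback-loop model in the spirit of 1970s system dynamics.
# All parameters are invented for illustration.

population, resource = 100.0, 10_000.0
BIRTH_RATE, USE_PER_HEAD = 0.03, 0.5

for year in range(201):
    if year % 25 == 0:
        print(f"year {year:3d}: population {population:7.0f}, resource {resource:8.0f}")
    consumed = min(resource, population * USE_PER_HEAD)
    resource -= consumed
    # Feedback: growth scales with how well this year's demand was met.
    wellbeing = consumed / (population * USE_PER_HEAD)
    population += population * BIRTH_RATE * (2 * wellbeing - 1)
```

The model ‘predicts’ overshoot and collapse because overshoot and collapse are built into its feedback structure, which is the sense in which such modelling could generate ‘scientific’ evidence of disaster.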

(A very fascinating, completely digressive thread is how the postwar years in science departments on the American east and west coasts saw the promotion of specific, linked areas of work: cybernetics and systems theories, operant conditioning, game theory.)

Jay Forrester’s system-dynamics modelling underpinned The Limits to Growth, the seminal study that used computer-generated modelling to forecast that natural resources could not sustain the predicted growth of human societies. It gained traction at a time when the idea of the earth as our ‘home’ was gaining currency and the environmental movement was growing (some more on that here by @tattinot and myself, for a work project). However, according to the film, more recently updated versions of ecology studies, based on empirical evidence, suggest that nature is actually unpredictable, constantly changing and resetting the norms by which it functions. Yet it appears to be very difficult to let go of the idea of the natural environment as a self-regulating mechanism that seeks ‘balance’.

There is also a connection between the development of cybernetics and AI and this has implications for discussions of ethics; however, we’re not anywhere near being ‘taken over by the machines’, no matter what sort of loud chest-beating the campaign against killer robots does.

Ethics, ethical apps, quantified self, AI…. a PhD is mostly an exercise (it seems to me at this point in time) in being very focused and knowing how to ask and answer a single question. So, sadly, while I probably won’t have a question that covers all of these areas, at least I get to dabble in all of them to some extent over the next few years.

Here’s a random picture of trees to end this, because trees are wonderful and there were some references to nature in this post. This was taken in my aunt’s (tea) garden where I have spent many happy summers.


Trees in the monsoon in the Nilgiris.


Hot Flash

A dwarf called Warren runs the Internet of Things facility and I am in love with him. You can never really rationally explain why you love someone; you just do. Warren is in trouble with his refrigerator. The refrigerator started messaging HOMELYNX about how the cucumber supply was going down faster than usual. For one thing, there shouldn’t even be cucumbers in the refrigerator, and while the most recent supply could be rationalised by the tubs of hummus, labneh and borani – guests – it was still going down very fast. Had anything else reported something irregular about the cucumbers? It turns out that the waste disposal unit could verify that cucumber peels had been identified and the toilet could detect traces of them; so we know they hadn’t been thrown out of the window at an unsuspecting passerby. That would have been funny, actually, especially if there were such a thing as a window or a passerby around here. No, all you have here is the hum and rinse of electricity through your hair.

The thing is, Warren doesn’t even eat cucumbers; they were left over from the crudité plate at the farewell party for the Chief. Not wanting to waste them, and knowing I love cucumbers, Warren just put the extras in the fridge. Some things are perfectly rational and explainable, but the problem with rationality is that everyone has their own version of it.

Warren maintains a section of the main server farm, MEM046Z, where the Internet of Things is made, and he isn’t supposed to fall in love. He certainly isn’t supposed to fall in love with someone he met online who can only stand to eat cucumbers and yoghurt all summer and thinks she is a Timurid’s Wife. The Internet of Things is a high-security facility; no one is allowed to enter except authorised personnel, and certainly not any Central Asian types – real or imaginary.

The irony doesn’t escape us that it all started with the very same tattling refrigerator having a Twitter exchange with @thetimuridswife. I also love melons and ice-cream, and the refrigerator was telling me about the history of ice-cream making, and of kulfis in particular, long before modern refrigeration. (Kulfi has been appropriated by the Indians but it actually came from Central Asia.) If you pulled up the logs you’d see Twitter exchanges about flavours and their pairings, tweets that made sense to no one but the two of us. It started with the refrigerator tweeting ‘beetroots & mustard’. Then I tweeted

@thetimuridswife parmesan and chocolate

hesitantly, and waited to see what would happen. And then it came:

@coolhuntings23 blue cheese and pear

@thetimuridswife chocolate and onions

@coolhuntings23 green beans and oranges

There are no secrets with a dwarf. The dwarf had hacked into the refrigerator’s Twitter ID and was tweeting as it, without the refrigerator realising it had been compromised. It had always been him and me; the refrigerator was just a… Trojan horse.

Over a series of Twitter exchanges I told Warren all about my travels and reincarnation. I am a Timurid’s wife and the fleshy concubine of a Sassanid warlord in ancient Samarqand, “a city so steeped in poetry that even medical doctors wrote their treatises in verse.” As a result I am something of a secret agent with very high levels of security clearance. Uzbek, in those days, far outstripped Persian as a language. Persian had one word for crying; Uzbek had over a hundred: crying like a baby hiccuping, crying as if you have lost your keys, crying as if your parents have died, crying over beautiful poetry, crying for the way you used to love someone and don’t anymore. Samarqand was so far advanced in the sciences, art, architecture, medicine, astronomy, poetics… Warren thinks that sometimes I’m doing other people’s share of make-believe as well.

He lied about there being another person in the house eating cucumbers. He said he had changed his diet, but the feeds from the heat sensors revealed a second person in the house. Once they started pooling all their data and looking at everything that wasn’t Warren, they found me. I couldn’t help it: I’m menopausal, and all that seems to keep me cool is a diet of cucumbers and yoghurt. (Dill and garlic in the mix never hurt.)

It wasn’t easy to hide from a house; it was like being 12 again, when all the girls are whispering about you behind your back and you absolutely know they are, but you can’t get even the smallest piece of information from anyone about it or make them stop. It was like the time your best friend found and read your secret diary.

Warren said we should just continue as normal – quietly, him going about his work and me reading, studying and writing. In the evenings we would eat and cheat at cards and giggle over other people’s data streams. It was only a matter of time before they came for us. Till then, he told me to play with his hair and tell him about the siege of Samarqand.