
Monthly Archives: July 2014


How the Facebook Manipulation Experiment Showed the Importance of “Algorithmic Neutrality” to Social Media by Violating It

It’s been a few days now since online outrage started over a recently published study involving Facebook. The study manipulated people’s Facebook News Feeds in order to test for a phenomenon of “emotional contagion”. In essence, the research slightly changed what showed up on nearly 700,000 people’s News Feeds: the number of “positive” or “negative” messages displayed was systematically varied, in order to see whether this had any effect on how many “positive” or “negative” messages were subsequently posted by the people who read those News Feeds. The study found a small but statistically significant positive correlation between the overall emotional tenor of the messages a person saw in their News Feed and the overall emotional tenor of their own Facebook posting. But the findings of the study are not what so many people have found disconcerting about it.

The source of outrage seems to lie more in the way the study was performed. But exactly what is so problematic about the study’s ethical and methodological choices is not always well expressed. It doesn’t help that several distinct concerns are at stake: informed consent in experiments, privacy, and the question of when it is or isn’t ethically acceptable to manipulate someone.

It’s that last one that interests me, and it’s the one that I think makes most people outside the Silicon Valley tech-culture bubble intuitively feel that a study which manipulates Facebook News Feeds is somehow “creepy”, even if they have trouble putting the reason for that feeling into words. Rather than denigrate this intuitive reaction as the product of an “online tantrum industry”, it might be more worthwhile to investigate why the feeling occurs, and to see whether it’s justified. I believe the intuition stems from an unstated assumption about social media and web 2.0, an assumption so ingrained that it only became visible as an assumption once this study violated it.

Call it “algorithmic neutrality”. The idea is that, where a software service chooses what to show or not to show an individual, as already happens in Facebook’s News Feed, the decision-making process is (a) automated, and (b) implemented in exactly the same way for every user. This holds even when services are “personalised”: in algorithmically neutral personalisation, what an individual sees is determined by the algorithm operating on their data, not by anyone manually inspecting what that particular person does or does not do on the service.
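To make that abstract definition a bit more concrete, here is a minimal Python sketch of what algorithmically neutral personalisation looks like. To be clear, this is not Facebook’s code: the data structures, field names, and scoring rule are all made up for illustration. The point is only that the feed is personalised, in that it uses a particular user’s data, while the procedure producing it is identical for every user.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    engagement: float  # e.g. a count of likes and comments; illustrative only
    recency: float     # higher means newer; illustrative only

def score_post(affinities: dict, post: Post) -> float:
    # Personalised: the score uses *this* user's data (their affinities),
    # but the scoring rule itself is the same rule for every user.
    return affinities.get(post.author, 0.1) * post.engagement * post.recency

def build_feed(affinities: dict, candidates: list, limit: int = 20) -> list:
    # Every user's feed is produced by the same procedure; nobody manually
    # picks or removes items for particular individuals.
    ranked = sorted(candidates, key=lambda p: score_post(affinities, p), reverse=True)
    return ranked[:limit]
```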

Most contemporary social media services don’t make a point of advertising algorithmic neutrality, because it’s almost always just assumed. In fact, it’s only if algorithmic neutrality is assumed that the “revolutionary” rhetoric of social media makes any sense at all. Social media, according to this rhetoric, is “democratising” and “empowering” precisely because no government or corporation stands in the way of masses of ordinary people communicating and coordinating their activities via platforms like Facebook. Implicit in this rhetoric is the assumption that Facebook, for example, doesn’t alter what people see in any way that might be open to political manipulation.

This is why the study comes across as creepy, even though Facebook routinely configures what appears on people’s News Feeds. That routine configuration is algorithmically neutral: it is processed automatically according to the pre-set, universally applicable rules of the EdgeRank™ algorithm. The manipulations in the study were not.
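Continuing the toy sketch from above, the difference can be put in code terms. Again, everything here is hypothetical: the word lists, the 50% omission rate, and the helper functions are stand-ins rather than the study’s actual method (the published paper describes classifying posts with word-counting software and omitting a proportion of them). What matters is the shape of the change: an extra filtering step whose behaviour depends on which experimental condition a user was assigned to, rather than one rule applied uniformly to everyone.

```python
import random

POSITIVE_WORDS = {"great", "happy", "love"}   # toy word lists, purely illustrative
NEGATIVE_WORDS = {"sad", "awful", "hate"}

def is_positive(post: Post) -> bool:
    return any(w in post.text.lower() for w in POSITIVE_WORDS)

def is_negative(post: Post) -> bool:
    return any(w in post.text.lower() for w in NEGATIVE_WORDS)

def experimental_feed(affinities: dict, candidates: list, condition: str, limit: int = 20) -> list:
    # The extra, condition-specific step below is what breaks neutrality:
    # which posts get withheld from a given user now depends on the
    # experimental group that user was assigned to, not on one rule
    # applied uniformly to everyone.
    if condition == "reduced_positive":
        candidates = [p for p in candidates if not (is_positive(p) and random.random() < 0.5)]
    elif condition == "reduced_negative":
        candidates = [p for p in candidates if not (is_negative(p) and random.random() < 0.5)]
    return build_feed(affinities, candidates, limit)
```

Users outside the experiment would still get the plain build_feed result; users inside it would get a feed shaped by a decision someone made about their particular group.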

By making intentional, non-algorithmically determined alterations to Facebook News Feeds, the Facebook emotional manipulation study violated the principle of algorithmic neutrality. In doing so, it contradicted a foundational assumption about what supposedly makes social media politically revolutionary. It’s fascinating to watch defenders of the study make exactly this point about the non-revolutionary nature of social media. They essentially argue that Facebook is a media business like any other, and that changes to its informational product (not yours), such as manual alterations to the way News Feed data gets displayed, are just another media business changing its product. New media, in this framework, is just as corporate-owned and corporate-ruled as old media.

Is social media really “revolutionary”? I’d say it still could be, to the extent that algorithmic neutrality can be adopted as an ethical principle. But that raises the question of whether a media business can commit to algorithmic neutrality and still remain profitable against competitors who decide not to be so ethical. There’s also the question of whether “algorithmic neutrality” is even possible: a small but burgeoning body of literature argues that an algorithmic implementation can never be politically “neutral”, but is always shaped by the political circumstances of its implementation. And then there’s the question of how, without detailed knowledge of a company’s inner workings, one could determine whether it is really being algorithmically neutral. In any case, I think the default assumption that social media companies are automatically algorithmically neutral has now been exposed by this study as an unwarranted assumption. And that is the main reason the study feels intuitively “creepy”, at least in the context of how people, bombarded as they are with the message that social media is “revolutionary”, “democratising”, and “empowering”, previously expected social media services to operate.
