Category Archives: Internet Studies

Typing out Loud: digitalisation as a problematic modernising offensive

What is “modernity”? Many social and political theorists use this term to describe the outcome of significant changes in the West – the Enlightenment, the French and American Revolutions, the Industrial Revolution – that separate it from “pre-modern” conditions. A frequent interest of modernity theory is the possibility that these kinds of changes are now going global, and what consequences this has both for the world and for the concept of “modernity” itself.

The concept is certainly problematic already. The claim to the status of modernity, of “being modern”, is implicitly a claim to an inherent superiority. This holds even if modernity is becoming a global phenomenon, since “the West” still claims to be the original source of modernity. Moreover, where the concept of being modern in the West usually entailed ideas of social and economic progress, this progress was typically bought at the cost of the conquest and exploitation of non-Western peoples. The history of European and Western “modernity” is equally a history of international colonialism and imperialism (Knauft 2002).

Contemporary modernity theorists are generally aware of these problems, to varying degrees, and attempt to take them into account. In the process, the moral weight of the concept of modernity of course becomes ambiguous. While most modernity theorists, in contrast to theorists of postmodernity, retain a sense that the “project of modernity” (Habermas 1983) still has moral goals of conceivably universal applicability to humanity, they now treat the possibility of identifying these goals and actually implementing them as much more fraught and ambiguous than the naive views of “progress” held by European and Western social and political theorists of the 18th through 20th centuries ever suggested.

As a consequence, modernity theorists often claim that there is no single model of modernity. Different societies and cultures can be modern in very different ways. Some take this to further suggest that Europe itself has experienced different kinds of modernity in its history. Wagner (1994; 2013) is one such modernity theorist. Moreover, he attributes the existence of multiple modernities to an ambiguity inherent in the moral values underpinning modernity itself.

Wagner (2013) claims that there are two main ideals that underpin any expression of modernity: autonomy and rational mastery. The relationship between these two ideals is inherently ambiguous: increasing autonomy often works against the possibility of mastery by some; increasing mastery often works against the possibility of autonomy of some. Both the expression of these two ideals, and the way they are accommodated (or not) to each other, can and do vary tremendously. A modern collective experiences crisis when the accommodation between these expressions of ideals breaks down, which it all too frequently does. Wagner (1994) characterises the history of modern Europe as a succession of types of modernity, each new one implemented in response to the crisis of the previous one.

Wagner’s theory of modernity is also distinctive in that he doesn’t entirely treat the process of modernisation as the ongoing and continuous work of completely impersonal forces. Rather, he claims that modernisation is partial and sporadic, and can often change course. This is because he claims that modernisation is the product of the actions of social agents. Modernisation, as a transformation of social structure and of knowledge (including knowledge of values), occurs through the empirical workings of situated social actors. These social actors need access to the power and resources of a society that will enable them to make the changes they desire, and they will usually not do so uncontested. This accounts for why conditions associated with modernity appear unevenly in history and around the globe.

Modernity is multiple because the social activity of modernisation, when it occurs, is done in the name of implementing the ideals of modernity. But there is no set way in which these ideals can be implemented. As already mentioned, the relationship between the ideals of autonomy and rational mastery is an ambiguous one. But the implementation of even a single ideal can be imagined very differently too. American liberal democracy, Marxist communism, and European democratic socialism all have very different ideas about what “autonomy” means in practice. But they are all forms of modernity.

Sometimes, further, modernisation entails not only the deliberate effort to implement a modern ideal, but also a deliberate effort to undo prior modernisation efforts that implemented the ideal (or ideals) of modernity differently. Such efforts at transformation can also of course be aimed at attempting to clean up the unintended consequences of earlier modernisation attempts, for example where focus on one ideal might have led to the other ideal getting implemented poorly or haphazardly.

Wagner (1994) refers to such intentional efforts at modernisation as “modernising offensives”. In this regard, I would say that the development and diffusion of digital technology throughout the globe, in both Western and non-Western societies, is part of the process of such a modernising offensive. The social agents are, currently, the tech elites of Silicon Valley. The ideal is autonomy. The expression of it is “empowerment through digital media”, envisioned perhaps as connectivity (Schmidt & Cohen 2014) or as access to near-infinite data (DuBravac 2015). There are almost certainly others. In any case, their extreme focus on autonomy has left the place of rational mastery wide open. This is a problem.

In this particular modernising offensive, there seems to be a presumption that rationality is inherent to the technology itself. Such a presumption no doubt contributes to the modernisers’ claims that the technological transformations they envision are “inevitable” (Schmidt & Cohen 2014, p. 261), something that “will happen regardless of which road we take” (DuBravac 2015, p. xxii). The promise of empowerment through technology is so great that it will be sought after and implemented no matter what. It’s simply the rational thing to do.

Empowerment implies liberation of the self from the constraints of others, but it also implies the ability to constrain. The fear that I have is that the tech elite engaged in this “digitalising” modernising offensive have presumed that they don’t need to worry about how their commitment to autonomy might lead to problematic implementations of mastery in society. A longstanding critique of classical modernity is that the will to power over nature becomes a will to power over people (Horkheimer & Adorno 2002). In terms of this new “informational” or “digital” modernity, the digital modernisers – the tech elites of Silicon Valley – seem to express an interest in possible problems of people having mastery over people only to the extent that they presume digital media will liberate people from the old forms of social control.

Even if they do (which is far from assured), that doesn’t eliminate the possibility of new ways of exerting control coming about as an unintended consequence of a digitalising modernising offensive. Indeed, I suspect that they already are, in the form of new ways of manipulating both individual and collective attention.

The problem with much criticism of contemporary technology is that the proponents of that technology all too readily paint their opponents as Luddites. Since technology is also the core resource of their modernising offensive – both as the means of engaging in it and as the resource to be diffused – opposition to such an effort doesn’t just seem anti-technology, it seems anti-modern. A critique of this modernising offensive, in a society that values modernity, needs to perform that critique in the name of modernity too. But it needs to be in the name of a different image of modernity. It needs to be a modernising offensive of its own, one that can explicitly explain the intended nature of its commitment to both autonomy and mastery, and can explicitly explain the means of reaching an accommodation between them. To be honest, I’m currently not sure if that’s even possible.

References

DuBravac, S 2015, Digital destiny: how the new age of data will transform the way we work, live, and communicate, Regnery Publishing, Washington, DC.
Habermas, J 1983, ‘Modernity: An Incomplete Project’, in H Foster (ed), The Anti-Aesthetic: Essays on Postmodern Culture, Bay Press, Seattle, WA, pp. 3–15.
Knauft, BM 2002, ‘Critically Modern: An Introduction’, in BM Knauft (ed), Critically Modern: Alternatives, Alterities, Anthropologies, Indiana University Press, Bloomington, IN, pp. 1–54.
Schmidt, E & Cohen, J 2014, The new digital age: transforming nations, businesses, and our lives, First Vintage Books edn, Vintage Books, New York.
Wagner, P 1994, A Sociology of Modernity: Liberty and Discipline, Routledge, New York.
Wagner, P 2013, Modernity as Experience and Interpretation, 1st edn, Wiley, Hoboken.

Monopolies of Knowledge: A Better Conception, Illustrated by the History of Hyperlinks

Primitive understandings of Harold Innis’ concept of “monopolies of knowledge” regard it as mere information-hoarding. What I think of as “vulgar Innisianism” treats such hoarding as deliberately motivated, initiated and maintained by an elite who intentionally use the monopoly to preserve their elite status. Both of these simplistic assumptions greatly underestimate the importance and explanatory power of the concept.

In an article dedicated to showing how the new concept of “deep links” in mobile apps isn’t actually all that new, Scott Rosenberg includes a discussion of how the nature of hyperlinks on the Web has changed, and how the Google search engine is heavily implicated in that change. It illustrates ways in which monopolies of knowledge can be conceived that go beyond primitive and vulgar Innisianism.

As noted in the article, the original idea behind hyperlinks was to create hypertext. Hypertext wasn’t a technological form so much as a concept: text that was neither linear nor determinate. A reader could shift to and from different texts, backtracking and diversifying the trajectory of their reading at will. The hyperlink, as originally implemented via the protocols developed by Tim Berners-Lee, offered a way of partially implementing this notion. The hyperlink could be, and in the early days of the web often was, a way of linking a word or phrase occurring in the middle of a text to another document, or even another section of another document. Rosenberg notes the experience that this created:

Here’s the hardest thing to remember about discovering links at the dawn of the Web: They were fun. As journalist Gary Wolf put it in the lead of a 1994 Wired piece that introduced the Web browser Mosaic to a wide readership: “Mosaic is not the most direct way to find online information. Nor is it the most powerful. It is merely the most pleasurable way… By following the links — click, and the linked document appears — you can travel through the online world along paths of whim and intuition.”

James W Carey (Communication as Culture, pp. 148-9) provided a more complex idea of monopolies of knowledge, one that goes beyond mere information-hoarding. While acknowledging that one form of monopoly could refer to the hoarding of “factual information or data”, he claimed:

There is, however, a more stringent sense of the meaning of a monopoly of knowledge. When one speaks, let us say, of the monopoly of religious knowledge, of the institutional church, one is not referring to the control of particles of information. Instead, one is referring to control of the entire system of thought, or paradigm.

Monopolies of knowledge don’t just apply to information. If they can apply to control of entire systems of thought, they can refer to modes of knowledge that aren’t obviously informational. They can affect the answer to questions like “what is a hyperlink used for? And why?”, for instance. They can change the nature of know-how, and of interpretation of reality.

Rosenberg more or less argues that this is exactly what happened to the know-how associated with creating and understanding hyperlinks, courtesy of the Google search engine – and that in doing so, Google centralised the power of links. Initially, Rosenberg claims, the “power” of hyperlinks resided in their ability to “subvert hierarchy”. But the Google search engine operated on a different assumption about what hyperlinks were: it “showed us that links could be read as signals of authority and value”. It basically redefined the answer to the question of what a hyperlink is for, and why.
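
The redefinition Rosenberg describes is essentially the premise of PageRank, the published algorithm around which Google built its search engine. As a rough illustration only – a toy Python sketch of the published idea, not Google’s actual implementation – a link can be treated as a vote whose weight depends on the authority of the page casting it:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to.
    Dangling pages (no outgoing links) are ignored for simplicity."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Every page keeps a base share of rank, then receives a damped
        # share of rank from each page that links to it.
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, targets in links.items():
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# "b" is linked by both "hub" and "a", so it ends up the most authoritative:
print(pagerank({"hub": ["a", "b"], "a": ["b"], "b": []}))
```

On this reading, a hyperlink stops being an invitation to wander and becomes a measurable endorsement – which is exactly the shift in know-how Rosenberg is pointing at.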

And it managed to propagate that redefined answer. It did so because thinking of hyperlinks in this way was just so useful. It turned searching the web from a hit-and-miss affair into one where you could almost always find the most relevant result for your query, often with only the vaguest idea about how to formulate your query string. But in doing so, it displaced the original conception of what a hyperlink could be:

Links suddenly weren’t so much fun any more. They stopped serving us as an alternative way of thinking about and creating informational relationships; they settled into a functional role. They became tools for navigating websites and pointers for sharing content on social networks. Finally, links became click-bait — transparent come-ons for traffic in an accelerating race to the bottom of our brainstems. We found ourselves arguing whether links help us see connections or just distract us or make us stupid.

Rosenberg touches on the vulgar version of Innisian monopolies when he points out that Google makes money because “they put a price tag” on links. But he offers a way of understanding a more expansive, and less vulgar, version of it, when he points out that Google, just by existing, has changed the default practice of hyperlinking and the default understanding of what hyperlinks are for, and that Google specifically did not intend to do this. They wanted to help people find information, not redefine what “hyperlinking” meant. Even so, that is what happened.

As with many of his core concepts, Innis never specifically defined what he meant by “monopolies of knowledge”. It falls to those who seek to build on his work to try not only to understand the concept’s potential applicability today, but also how best to conceptualise it so that it might be most applicable. By counting interpretive, pragmatic, and other forms of knowledge as knowledge, and by avoiding the temptation to always ascribe either the creation of a monopoly or its maintenance to deliberate intentionality, I think this concept of Innis’ can be applied much more fruitfully to the contemporary communication environment.

An example to consider: Facebook’s Internet.org project gets criticised because of the financial incentive that lurks behind it. A more pressing critique, one that sees the project as potentially creating problems not intended by its creator, would ask whether, simply by existing, Internet.org might transform the understanding of what the Internet actually is. My initial answer is that, without necessarily meaning to, it drastically changes the Internet into something that is, for all intents and purposes, Facebook.

Legacy Facebook Profiles: Digital Memorial or Post-Mortem Persona?

Facebook recently changed the way online profiles work in the event of a profile owner’s death. A new feature, not yet available in all countries, allows existing (living) users to add a legacy contact to their account. In the event of the profile owner’s death, the legacy contact has a limited amount of control over the deceased user’s profile: they can add new content to the Timeline, respond to new friend requests, and change the profile picture and cover photo.

For the most part this is simply an extension of the existing service Facebook offered for memorialising the accounts of the deceased. Previously, memorialised accounts could not be changed by anyone else. In a few cases this meant that the profile of a deceased person was not memorialised, but was instead controlled by someone else who knew the deceased user’s login credentials. The legacy contact feature would appear to eliminate the need for this somewhat awkward arrangement.

The theory driving the change, and Facebook’s memorialisation service in general, is that the profile of a deceased user is a memorial to that user. Memorialised accounts, according to the Facebook help page on memorialisation, are “a way for people on Facebook to remember and celebrate those who’ve passed away”. Similarly, a Facebook spokesman told Mashable that “Memorialization allows friends and family to post remembrances and honor a deceased user’s memory, while protecting the account and respecting the privacy of the deceased.” This implicit understanding of memorialised profiles is rendered explicit by a further tweak Facebook made to memorialised accounts: prepending “Remembering” to the displayed name. Now it’s quite clear that a memorialised account isn’t the same as the Facebook account of a living person.

I find Facebook to be actually somewhat ahead of the curve when dealing with issues of thanatosensitivity in social software – the need to take into account the unavoidable reality of death – largely because Facebook tends to run into these issues first, given the extent and ubiquity of its service. Even so, I worry that these changes are being driven by the privileging of one possible model of understanding post-mortem profiles – that of a digital memorial – at the expense of others. In particular, many people treat post-mortem profiles as what I would call a post-mortem persona.

In quite a few cases, those with the authority to request the memorialisation of a deceased user’s Facebook profile will not do so, even when they are aware of the facility. Researcher Natalie Pennington, in her study of Facebook mourners, found that many of the people she interviewed, even when they knew of the memorialisation feature, “indicated that they were not interested in turning the profile into a memorial page…because the personal touches provided by the deceased that helps them to feel so connected were removed and left them less connected”. Similarly, Stephanie Buck, writing for Mashable, claims that “most users don’t raise a Facebook flag [for profile memorialisation] at all, choosing instead to peruse and interact with a person’s regular Facebook presence even after his or her demise”. Many, though by no means all, of those who mourn on Facebook appear not to want a post-mortem profile to be a de-personalised memorial, but to in some way retain the uniqueness and individuality of the person as they existed in life.

To some extent, then, I suspect many mourners don’t want a stark division to exist between a person’s social media profile as it existed in life and as it exists after their death. This is quite in line with the “Continuing Bonds” theory of grief, in which grieving is presumed not to be a way of letting go of a relationship so much as renegotiating the relationship one continues to have with the deceased, even though the deceased themselves no longer actively plays a part in maintaining it. The official memorialisation process, including the new features of allowing legacy control of an account and emphasising the division between living and deceased people’s profiles, seems to me to work against those who wish to use the profile in this way: as a continuing, albeit post-mortem, persona rather than a new memorial, severed through memorialisation from the life of the person that went before.

This is complicated of course by the fact that there are other mourners, mentioned in Stephanie Buck’s article for instance, for whom the continuing presence of a “live” profile of a deceased person is an imposition and an unwanted reminder of grief rather than an aid to mourning. Such people may prefer memorialisation, or deletion of the account altogether. I guess the point is that, for all that Facebook is trying to do the right thing, their proposed solution is too biased towards one possible view of what a deceased person’s profile actually is, at the expense of another, possibly more common, one.

Should a Facebook profile of a deceased person be a memorial and nothing more, or is it reasonable to treat it as an expression of persona even in the absence of a living person performing that persona?

How the Facebook Manipulation Experiment Showed the Importance of “Algorithmic Neutrality” to Social Media by Violating it

It’s been a few days now since online outrage started over a recently published study involving Facebook. The study involved the manipulation of people’s Facebook News Feeds in order to assess the possible existence of a phenomenon of “emotional contagion”. Basically, the research entailed slightly changing what showed up on nearly 700,000 people’s Facebook News Feeds. The amount of “positive” or “negative” messages displayed on people’s News Feeds was systematically varied, in order to see if this had any effect on how many “positive” or “negative” messages were subsequently posted by the people who read those News Feeds. The study found a small but statistically significant positive correlation between the overall emotional tenor of the messages a person saw in their News Feed and the overall emotional tenor of their own Facebook posting. But the findings of the study are not what so many people have found so disconcerting about it.
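
For readers who want the mechanics, here is a schematic Python sketch of the filtering step as the published paper (Kramer, Guillory & Hancock 2014) describes it. The word lists, posts, and omission rate below are invented stand-ins: the study itself classified posts with LIWC word counts and assigned each user their own omission rate.

```python
import random

# Toy stand-ins for the study's word-list (LIWC) coding of posts.
POSITIVE_WORDS = {"happy", "great", "love"}
NEGATIVE_WORDS = {"sad", "awful", "hate"}

def tone(post):
    words = set(post.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def one_feed_viewing(posts, reduced_tone, omission_rate):
    """Withhold some posts of the targeted emotional tone from one viewing."""
    return [p for p in posts
            if not (tone(p) == reduced_tone and random.random() < omission_rate)]

# A user assigned to the "positivity reduced" condition:
feed = ["feeling happy today", "awful traffic", "meeting moved to 3pm"]
print(one_feed_viewing(feed, "positive", omission_rate=0.5))
```

The emotional tenor of what these users went on to post was then compared against that of control groups, whose feeds had posts omitted at random rather than by tone.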

The source of outrage seems to be more about the way in which the study was performed. But the actual details of what is so problematic about the study’s ethical and methodological choices are not always well expressed. It doesn’t help either that there seem to be several issues of concern at stake: issues of informed consent in experiments, issues of privacy, issues of when it is or isn’t ethically acceptable to manipulate someone.

It’s that last one that interests me, and it’s the one that I think makes most people outside of the Silicon Valley tech-culture bubble intuitively feel that a study that manipulates Facebook News Feeds is somehow “creepy”, even if they may have trouble putting the reason for their feelings into words. Rather than denigrate this intuitive reaction as somehow the product of an “online tantrum industry”, it might be more worthwhile to investigate why that feeling occurs, and see if it’s justified. I believe that this intuition stems from an unstated assumption about social media and web 2.0, an assumption so ingrained that it was only when this study violated it that it could become clear that it was an assumption at all.

Call it “algorithmic neutrality”. The idea of algorithmic neutrality is that, where a software service chooses what to show or not to show an individual, as already occurs in Facebook’s News Feed, this decision-making process is (a) automated, and (b) the automation is implemented in exactly the same way for each individual user. This is true even when services are “personalised”: in algorithmically neutral personalisation, the process of personalisation is determined algorithmically, not by any manual inspection of what an individual person does or does not do on that service.
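
A minimal sketch of that definition, using hypothetical functions rather than any real platform’s internals. In a neutral feed the inputs differ per user, but the procedure applied to them does not; the violation is when the procedure itself varies from user to user:

```python
def neutral_feed(user_posts, score):
    """Neutral personalisation: one ranking procedure for every user.
    Results differ only because each user's candidate posts differ."""
    return sorted(user_posts, key=score, reverse=True)

def non_neutral_feed(user_posts, score, per_user_tweak):
    """The violation: an extra, user-specific step alters the feed."""
    return sorted(per_user_tweak(user_posts), key=score, reverse=True)
```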

Most contemporary social media services don’t make a point of indicating algorithmic neutrality because it’s almost always just assumed. In fact it’s only if algorithmic neutrality is assumed that it becomes possible for the “revolutionary” rhetoric of social media to make any sense at all. Social media, according to this rhetoric, is “democratising” and “empowering” precisely because no government or corporation stands in the way of masses of ordinary people communicating and coordinating their activities via social media platforms like Facebook. Implicit in this assumption is that Facebook, for example, isn’t involved in altering the content of what people see in any way that might be subject to political manipulation.

This is why the study comes across as creepy, even though Facebook routinely configures what appears on people’s News Feeds. The routine configuration is algorithmically neutral, automatically processed according to the pre-set, universally applicable rules of the EdgeRank™ algorithm. The manipulations in the study were not.
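
For what it’s worth, the scoring rule Facebook publicly described around 2010 is simple to state: a story’s score is a sum, over the “edges” (posts, likes, comments and so on) connecting it to a viewer, of affinity × edge weight × time decay. A sketch with invented numbers – the real affinities and weights were never published:

```python
def edgerank(edges):
    """edges: (affinity, edge_weight, time_decay) triples for one story.
    The same formula runs for every user; only the inputs differ."""
    return sum(affinity * weight * decay for affinity, weight, decay in edges)

# A close friend's recent photo vs. an acquaintance's like on an old post:
print(edgerank([(0.9, 1.5, 0.95)]))  # ~1.28
print(edgerank([(0.2, 0.5, 0.30)]))  # ~0.03
```

That uniformity – one pre-set formula fed with different inputs – is precisely what the study’s condition-dependent manipulations broke.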

By making intentional, un-algorithmically determined alterations to Facebook News Feeds, the Facebook emotional manipulation study violated the principle of algorithmic neutrality. Furthermore, in doing so it contradicted a foundational assumption of what supposedly makes social media politically revolutionary. It’s fascinating to watch defenders of this study make this exact point about the non-revolutionary nature of social media, essentially saying that Facebook is a media business just like any other media business, and any changes to their informational product (not yours), such as manual alterations in the way News Feed data gets displayed, are just like any other media business changing their product. New media, in this framework, is just as corporate-owned and corporate-ruled as old media.

Is social media really “revolutionary”? I’d say that it still could be, to the extent that algorithmic neutrality can be adopted as an ethical principle. But that also raises the question of whether a media business can commit to algorithmic neutrality and still remain profitable in the face of competitors who might decide not to be so ethical. There’s also the question of whether “algorithmic neutrality” is really possible, as there’s a small but burgeoning body of literature arguing that an algorithmic implementation can never be politically “neutral”, but is always governed by the political circumstances of its implementation. And then there’s the question of how exactly, in the absence of details of a company’s inner workings, one can determine whether a company is really being algorithmically neutral or not. In any case, I think that the default assumption that social media companies are automatically algorithmically neutral has now been exposed by this study as an unwarranted assumption. And this is the main reason why the study feels intuitively “creepy”: people, bombarded as they are with the message that social media is “revolutionary”, “democratising”, and “empowering”, previously expected social media services to operate that way.

Analyses of time-space remediation in online media: from MUDs and MOOs to a new “hypermediacy”?

Going back to some academic readings from the 1990s, it’s surprising just how much it was taken for granted that virtual reality was on the verge of becoming the Next Big Thing [tm]. It’s also a reminder of just how seriously people took the primitive, text-based communication environments of MUDs and MOOs, where people engaged in real-time, text-based interaction, including text-based construction (through description and narrative) of their “cyberspace environment”. The relationship is not coincidental: MUDs and MOOs were often regarded, as Mark Poster put it in The Second Media Age (p. 51), as “transitional forms of virtual reality”.

Today, you’d be hard-pressed to find someone among the general population who’d even heard of these arcane online “environments”. If they could still be considered the first steps into “virtual reality”, then their successors would be Second Life or MMORPGs like World of Warcraft. While these have enjoyed some measure of success, they’re a far cry from the “wave of the future” expectations that drew such interest in MUDs and MOOs. By far the most popular mainstream form of Internet-based media today is so-called “social media”. And those forms of media, like Facebook and Twitter, don’t conform to the model of a “virtual reality” “cyberspace” at all.

I think a lot of people studying the Internet in the 1990s assumed that the everyday experiences of time and space would simply be transposed into online media environments. There wasn’t much focus in 90s discourse on the possibility that online time-space experience might be radically different.

There were two academics in the 1990s who hedged their bets a little. Jay Bolter and Richard Grusin published a book in 1999 called Remediation. Their basic thesis was that all forms of media tended to be culturally defined in terms of other media. The distinctiveness of their theory was in their claim that, in contemporary Western societies at least, cultural processes of comparative definition always attempted to portray a particular media form as superior to other media in terms of two competing human needs: the desire for transparent immediacy, where the experience of using the media form felt like no media was involved; and the desire for hypermediacy, where the experience of using the media form brought the fact of its mediated nature to the forefront of attention. The different ways in which different media were defined in terms of whether they were better at providing transparent immediacy or hypermediacy constituted “the logic of remediation”.

Both transparent immediacy and hypermediacy are differently involved in different ways of creating an “authentic” experience. Transparent immediacy in media is claimed to achieve this through the assertion that a particular mediated experience is equivalent to unmediated experience and therefore “authentic” in that way. Hypermediacy is a little trickier. Bolter and Grusin suggest at one point that hypermediacy always involves a proliferation of heterogeneous and fragmented content presented in media, such as you find in a contemporary multi-windowed graphical user environment. This supposedly creates a more “authentic” experience because, in multiplying “the signs of mediation”, the result “tries to reproduce the rich sensorium of human experience” (p. 34). I’m not fully on board with this description of it, though.

It’s fairly clear that the promise of virtual reality is its claimed superiority in providing transparent immediacy: a fully immersive three-dimensional environment would be much more realistic than the flat, two-dimensional screens that characterise almost every other form of visual media. It’s not that hard either to see, just from looking at Facebook’s main interface, that there’s a logic of hypermediacy going on: disparate pieces of information are being bundled together in a way that would be impossible without Facebook’s mediation. At its most basic, the dominance of social media over (current efforts at) virtual reality suggests that in the logic of online remediation, the desire for hypermediacy has been stronger than the desire for transparent immediacy, contrary to the beliefs of people studying the Internet in the 1990s. But I think it’s somewhat more complicated than that. This is because I don’t think Bolter and Grusin got their “hypermediacy” concept quite right.

They skirt the edges of what hypermediacy could actually mean very late in their book, when they suggest that “the logic of hypermediacy” entails the “crowding together of images, the insistence that everything that technology can present must be presented at one time” (p. 269). The “proliferation”, “heterogeneity” and “crowding together” of hypermediacy, I would suggest, is just one of a number of ways in which media “hypermediate” the experience of time-space itself. Where “transparent immediacy” refers to a mediated experience of time and space that is identical to an unmediated one, hypermediacy refers to a mediated experience of time and space that is only possible through mediation. This could refer to a “crowding together” of images, as Bolter and Grusin suggest. However, it could also refer to, for instance, the placing of Friend activity in reverse chronological order on a Newsfeed, so that the reader’s temporal experience of reading them can be the exact opposite of how it would occur if it was unmediated. “Proliferation” isn’t the core of “hypermediacy”. The re-ordering of spatial and temporal experience is.
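
The reordering is trivial to state in code, which may be why it is so easy to overlook as a mediation. A toy example:

```python
day = ["morning status", "lunch photo", "evening check-in"]  # order as lived
feed = list(reversed(day))                                   # order as mediated
print(feed)  # ['evening check-in', 'lunch photo', 'morning status']
```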

By adopting this perspective on hypermediacy, the “logic of remediation” can be considered as less of a generic quest for “authenticity” in mediated experience, and more as a way of considering what kinds of ways a particular culture wants to re-orient (or preserve) experiences of time and space, according to how those attempts are instantiated in that culture’s dominant media forms. This goes beyond just the dualism of immediacy/hypermediacy, and into specifics of just how and why certain specific re-orientations of time and space occur. With any luck, such considerations might reveal something fundamental about the culture at issue.

Just why does Facebook re-orient time-space experience in the way that it does?

Typing Out Loud: “Algorithmically social media”

The phrase “social media” gets thrown about quite a bit to describe some sort of new media form that didn’t exist before the 21st century. Plenty of people, including myself, have considered what it is about media forms like Facebook that might warrant it being described as an entirely new category of media, and whether “social media” is an adequate way of categorising the “new” media form if it truly is something new.

Part of my answer previously was that “social media” are distinctive in that they provide a way to make social links persistently present and visible, via Friending, in a way that requires no ongoing actual social interaction for that link to exist. Or rather, it requires no interaction for that link to be instantiated and made visible through software. The representation of a link through software doesn’t necessarily indicate the existence of an ongoing, affective bond. The problem of when it’s appropriate to unFriend someone due to lack of interaction arises from this discrepancy. There is no recognition in the software of the social situation of “naturally drifting apart”.
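
The discrepancy is easy to see in code. The boolean edge below is roughly how a Friend link is represented in software; the decaying tie strength is a hypothetical model of my own, not anything Facebook actually computes:

```python
from datetime import datetime, timedelta

# The software's view: a Friend edge either exists or it doesn't.
friendships = {("alice", "bob")}

def tie_strength(last_interaction, half_life_days=90):
    """Hypothetical: an affective tie halves every half_life_days without contact."""
    elapsed_days = (datetime.now() - last_interaction).days
    return 0.5 ** (elapsed_days / half_life_days)

# Two years without any interaction: the Friend link still reads True,
# while any plausible model of the lived relationship has decayed to noise.
print(("alice", "bob") in friendships)                     # True
print(tie_strength(datetime.now() - timedelta(days=730)))  # ~0.004
```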

The prominent social networking site researcher danah boyd has referred to “autistic social software“. By this she means software which reduces the complexity of engaging in social life and social interaction to a limited set of simplistic, procedural steps. In other words, to an algorithm.

In the linked speech above, boyd advocates trying to design technology that integrates with existing social practices, including taking account of how users might end up using new technology in ways completely unanticipated by the original designers, and facilitating rather than restricting those unanticipated uses through further technological development. I wonder whether there’s a serious challenge in actually achieving this in software, given that (at least as I understand it) all software is the instantiation of one or more algorithms. The problem of dealing with the scenario of people drifting apart is an example of where the algorithm doesn’t fit the situation. The algorithms governing Facebook’s Newsfeed may help illustrate that the issue is not just that a particular algorithm doesn’t fit a particular situation, but that the very attempt to apply algorithms to social life is itself problematic.

Algorithms automate. They make tedious and repetitive tasks much faster and easier. Facebook’s initial introduction of the Newsfeed feature relieved users of the need to repeatedly visit their Friends’ profiles to find out what had changed. The feature initially faced some very heavy resistance, when the Newsfeed display was indiscriminate in what data from others it would display. However, once the option to manually filter what activity would be posted to Friends’ Newsfeeds was in place, the Newsfeed became one of the most important parts of the overall Facebook experience. It occupies a central position on the main Facebook interface to this day.

The effects of automation in this case extend beyond a mere quantitative reduction in the effort taken to acquire data. A new form of social awareness has been suggested to exist as a result of the automated aggregation of social data, most prominently in the Facebook Newsfeed. It’s been given the name “ambient awareness”. The consequences of this way of relating to others are still not entirely clear.

The drawback of automation has been in the way that the algorithms at issue may select how certain data is displayed (or not displayed) in the Newsfeed. This is where most of the conflict over the Newsfeed has been played out. The initial selection decision for Newsfeed display was to show everything: users had no way to prevent their status updates, photo uploads, and most significantly, changes in relationship status from “in a relationship” to “single” from showing up in everybody else’s Newsfeeds. It was only after the option to filter this activity was provided that the furore died down. But even then, the implementation was that everything was reported by default, and opting out required a specific decision.

More recently, Facebook has provided two ways of ordering the Newsfeed: “Top News” and “Most Recent”. However, it seems that Facebook prefers users to use the Top News view and almost always defaults to that, even though many users prefer the Most Recent ordering by default. The Internet is full of articles from people venting their frustration at finding Facebook reverting to the Top News ordering no matter how many times they switch the view to Most Recent (examples here, here, and here).

The Top News view is, in theory, quite useful. The idea seems to be that, given the massive proliferation of users and content that has occurred on Facebook, people would like an automated filter that rates content and displays only that which they’d be most interested in. From my own experience, some people do like this and prefer the Top News view for that reason. Unfortunately, many others do not, and would prefer their Newsfeed sorted in “chronological” as opposed to “algorithmic” order, as a Huffington Post writer put it.

Yet “chronological ordering” is also an algorithmic ordering. It may seem like a more “natural” ordering, but it’s still automated. The algorithm is less about selection and more about ordering: the most recent activity appears first.
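
The point is easy to see side by side: both views are sorting algorithms, and “Most Recent” simply uses a key (the timestamp) that feels natural, while “Top News” uses a computed score. The engagement-based scoring below is an invented stand-in for whatever Top News actually computes:

```python
posts = [
    {"text": "old but popular", "time": 1, "engagement": 9},
    {"text": "recent but dull", "time": 3, "engagement": 1},
    {"text": "middling",        "time": 2, "engagement": 4},
]

def most_recent(posts):
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def top_news(posts, score=lambda p: p["engagement"]):
    return sorted(posts, key=score, reverse=True)

print(most_recent(posts)[0]["text"])  # 'recent but dull'
print(top_news(posts)[0]["text"])     # 'old but popular'
```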

My impression is that this could be more or less in line with the phenomenological experience of how people relate to the past. The most recent past is the most immediately important and interesting, with older activity becoming less so the further back it goes. But this may not be the case. And there’s absolutely no reason why Facebook has to implement the Newsfeed in this way. Even with the Most Recent view existing as it does, it’s still quite possible to violate the normative order of reading. In coming back to the Newsfeed, why not scroll all the way back to the last activity that you’ve seen before, then start reading chronologically forward from there? Or do people already do that? I know I don’t, and I believe that it’s because Facebook’s Most Recent view makes it less effort to read things chronologically backwards. Or maybe that really is the most phenomenologically “natural” way of relating to past social activity. Or perhaps that’s the case only in Western society with its celebration of the present and future at the expense of the past. I’m really not sure.

Making selections about how certain activities are best performed is a vital part of pragmatic life, as Berger and Luckmann recognised long ago in their seminal text “The Social Construction of Reality”. Otherwise the sheer multitude of possibilities would lead to paralysis and nothing getting done. But placing the responsibility for that selection into the hands of an algorithm needs critique. This is true not just because there can be conflict over the preferred algorithm to apply, as in the case of the Facebook Newsfeed and the battles over what should or should not be displayed, and how. Nor is it only that algorithms might not exactly fit the situation, as in the case of a Friend link still existing in software long after it represents any existing social relationship. It is also that certain default assumptions about things as basic as chronology and the “best” way of relating to the past are implemented and imposed, in such a way that the possibility of even considering whether such basic aspects of life are essential or contingent is rendered much harder.

This seems especially relevant when algorithms are being applied to social relationships and social interaction. I think now that part of the difference between older forms of “sociable” media and modern-day “social media” is that the newer media forms are algorithmically social: they try to represent, and automate, aspects of social life through the application of algorithms to social life specifically. In that case, I think it bears asking just how we want our media to be algorithmically social, if that is indeed what we want.

Conference paper title of the day

(Re)Blogging Passions: Gay Porn in the Social Networking Era

I swear, I only found this because it showed up on my Academia.edu feed of “New Media”-tagged articles.