Category Archives: Internet Studies
What is “modernity”? Many social and political theorists use this term to describe the outcome of significant changes in the West – the Enlightenment, the French and American Revolutions, the Industrial Revolution – that separate it from “pre-modern” conditions. A frequent interest of modernity theory is the possibility that these kinds of changes are now going global, and the consequences this has both for the world and for the concept of “modernity” itself.
The concept is certainly already problematic. The claim to the status of modernity, of “being modern”, is implicitly a claim to an inherent superiority. This holds even if modernity is becoming a global phenomenon, since “the West” still claims to be the original source of modernity. Moreover, where the concept of being modern in the West usually entailed ideas of social and economic progress, this progress was typically bought at the cost of the conquest and exploitation of non-Western peoples. The history of European and Western “modernity” is equally a history of international colonialism and imperialism (Knauft 2002).
Contemporary modernity theorists are generally aware, in varying degrees, of these problems, and attempt to take them into account. In so doing, they render the moral weight of the concept of modernity ambiguous. While most modernity theorists, in contrast to theorists of postmodernity, retain a sense that the “project of modernity” (Habermas 1983) still has moral goals that are conceivably of universal applicability to humanity, they now treat the possibility of identifying these goals and actually implementing them as much more fraught and ambiguous than was suggested by the more naive views of “progress” held by European and Western social and political theorists of the 18th through 20th centuries.
As a consequence, modernity theorists often claim that there is no single model of modernity. Different societies and cultures can be modern in very different ways. Some take this to further suggest that Europe itself has experienced different kinds of modernity in its history. Wagner (1994; 2013) is one such modernity theorist. Moreover, he attributes the existence of multiple modernities to an ambiguity inherent in the moral values underpinning modernity itself.
Wagner (2013) claims that there are two main ideals that underpin any expression of modernity: autonomy and rational mastery. The relationship between these two ideals is inherently ambiguous: increasing autonomy often works against the possibility of mastery by some; increasing mastery often works against the possibility of autonomy of some. Both the expression of these two ideals, and the way they are accommodated (or not) to each other, can and does vary tremendously. A modern collective experiences crisis when the accommodation between these expressions of ideals breaks down, which it all too frequently does. Wagner (1994) characterises the history of modern Europe as a succession of types of modernity, each new one implemented in response to the crisis of the previous one.
Wagner’s theory of modernity is also distinctive in that he doesn’t entirely treat the process of modernisation as the ongoing and continuous work of completely impersonal forces. Rather, he claims that modernisation is partial, sporadic, and can often change course. This is because he claims that modernisation is the product of the actions of social agents. Modernisation, as a transformation of social structure and of knowledge (including knowledge of values), occurs through the empirical workings of situated social actors. These social actors need to have access to the power and resources of a society that will enable them to make the changes they desire, and they will usually not do so uncontested. This accounts for why conditions associated with modernity appear unevenly in history and around the globe.
The reason that modernity is multiple is that the social activity of modernisation, when it occurs, is done in the name of implementing the ideals of modernity. But there is no set way in which these ideals can be implemented. As already mentioned, the relationship between the ideals of autonomy and rational mastery is an ambiguous one. But the implementation of even a single ideal can be imagined very differently too. American liberal democracy, Marxist communism, and European democratic socialism all have very different ideas about what “autonomy” means in practice. But they are all forms of modernity.
Sometimes, further, modernisation entails not only the deliberate effort to implement a modern ideal, but also a deliberate effort to undo prior modernisation efforts that implemented the ideal (or ideals) of modernity differently. Such efforts at transformation can also of course be aimed at attempting to clean up the unintended consequences of earlier modernisation attempts, for example where focus on one ideal might have led to the other ideal getting implemented poorly or haphazardly.
Wagner (1994) refers to such intentional efforts at modernisation as “modernising offensives”. In this regard, I would say that the development and diffusion of digital technology throughout the globe, in both Western and non-Western societies, is part of just such a modernising offensive. The social agents are, currently, the tech elites of Silicon Valley. The ideal is autonomy. The expression of it is “empowerment through digital media”, envisioned perhaps as connectivity (Schmidt & Cohen 2014) or as access to near-infinite data (DuBravac 2015). There are almost certainly others. In any case, their extreme focus on autonomy has left the place of rational mastery wide open. This is a problem.
In this particular modernising offensive, there seems to be a presumption that rationality is inherent to the technology itself. Such a presumption no doubt contributes to the claims by the modernisers that the technological transformations they envision are “inevitable” (Schmidt & Cohen 2014, p. 261), something that “will happen regardless of which road we take” (DuBravac 2015, p. xxii). The promise of empowerment through technology is so great that it will be sought after and implemented no matter what. It’s simply the rational thing to do.
Empowerment implies liberation of the self from the constraints of others, but it also implies the ability to constrain. The fear that I have is that the tech elite engaged in this “digitalising” modernising offensive have presumed that they don’t need to worry about how their commitment to autonomy might lead to problematic implementations of mastery in society. A longstanding critique of classical modernity is that the will to power over nature becomes a will to power over people (Horkheimer & Adorno 2002). In terms of this new “informational” or “digital” modernity, the digital modernisers – the tech elites of Silicon Valley – seem to express an interest in possible problems of people having mastery over people only to the extent that they presume digital media will liberate people from the old forms of social control.
Even if they do (which is far from assured), that doesn’t eliminate the possibility of new ways of exerting control coming about as an unintended consequence of a digitalising modernising offensive. Indeed, I suspect that they already are, in the form of new ways of manipulating both individual and collective attention.
The problem with much criticism of contemporary technology is that the proponents of that technology all too readily paint their opponents as Luddites. Since technology is also the core resource of their modernising offensive – both as the means of engaging in it and as the resource to be diffused – opposition to such an effort doesn’t just seem anti-technology, it seems anti-modern. A critique of this modernising offensive, in a society that values modernity, needs to perform that critique in the name of modernity too. But it needs to be in the name of a different image of modernity. It needs to be a modernising offensive of its own, one that can explicitly explain the intended nature of its commitment to both autonomy and mastery, and can explicitly explain the means of reaching an accommodation between them. To be honest, currently I’m not sure if that’s even possible.
Primitive understandings of Harold Innis’ concept of “monopolies of knowledge” regard it as information-hoarding. What I think of as “vulgar Innisianism” treats such hoarding as deliberately motivated, initiated and maintained by an elite who intentionally use the monopoly to preserve their elite status. Both these simplistic assumptions greatly underestimate the importance and explanatory power of the concept.
In an article dedicated to showing how the new concept of “deep links” in mobile apps isn’t actually all that new, Scott Rosenberg includes a discussion of how the nature of hyperlinks on the Web has changed, and how the Google search engine is heavily implicated in that change. It illustrates ways in which monopolies of knowledge can be conceived that go beyond primitive and vulgar Innisianism.
As noted in the article, the original idea behind hyperlinks was to create hypertext. Hypertext wasn’t a technological form so much as it was a concept. The concept was text that wasn’t linear and determinate. A reader could shift to and from different texts, backtracking and diversifying the trajectory of their reading at will. The hyperlink, as originally implemented via the protocols developed by Tim Berners-Lee, offered a way of partially implementing this notion. The hyperlink could be, and in the early days of the web often was, a way of linking a word or phrase occurring in the middle of a text to another document, or even another section of another document. Rosenberg notes the experience that this created:
Here’s the hardest thing to remember about discovering links at the dawn of the Web: They were fun. As journalist Gary Wolf put it in the lead of a 1994 Wired piece that introduced the Web browser Mosaic to a wide readership: “Mosaic is not the most direct way to find online information. Nor is it the most powerful. It is merely the most pleasurable way… By following the links — click, and the linked document appears — you can travel through the online world along paths of whim and intuition.”
James W Carey (Communication as Culture, pp. 148-9) provided a more complex idea of monopolies of knowledge that goes beyond mere information-hoarding. While acknowledging that one form of monopoly could refer to the hoarding of “factual information or data”, he claimed:
There is, however, a more stringent sense of the meaning of a monopoly of knowledge. When one speaks, let us say, of the monopoly of religious knowledge, of the institutional church, one is not referring to the control of particles of information. Instead, one is referring to control of the entire system of thought, or paradigm.
Monopolies of knowledge don’t just apply to information. If they can apply to control of entire systems of thought, they can refer to modes of knowledge that aren’t obviously informational. They can affect the answer to questions like “what is a hyperlink used for? And why?”, for instance. They can change the nature of know-how, and of interpretation of reality.
Rosenberg more or less argues that this is exactly what happened to the know-how associated with creating and understanding hyperlinks, courtesy of the Google search engine. In doing so, it centralised that power. Initially, Rosenberg claims, the “power” of hyperlinks resided in their ability to “subvert hierarchy”. But the Google search engine operated on a different assumption about what hyperlinks were: it “showed us that links could be read as signals of authority and value”. It basically redefined the answer to the question of what a hyperlink is for, and why.
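The idea of reading links as “signals of authority and value” can be made concrete with a toy sketch of the PageRank-style logic on which Google’s original search engine was famously built. This is a simplified illustration, not Google’s actual implementation, and the link graph here is invented for the example:

```python
# Toy illustration of links as "signals of authority": a simplified
# PageRank-style power iteration. A page is ranked highly when it is
# linked to by pages that are themselves ranked highly. This is an
# illustrative sketch, not Google's production algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal authority
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # Each link passes on an equal share of the page's authority.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # A page with no outgoing links spreads its authority evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# An invented link graph: pages "a", "b" and "d" all point at "c".
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
scores = pagerank(graph)
print(max(scores, key=scores.get))  # "c" accumulates the most link authority
```

The point of the sketch is the shift in what a link *is*: in this model a link is no longer an invitation to wander, but a vote, and the aggregate of votes becomes a ranking.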
And it managed to propagate that redefined answer. It did so because thinking of hyperlinks in this way was just so useful. It turned searching the web from a hit-and-miss affair to one where you can almost always find the most relevant result to your query, often with only the vaguest idea about how to formulate your query string. But in doing so, it removed the original conception of what a hyperlink could be:
Links suddenly weren’t so much fun any more. They stopped serving us as an alternative way of thinking about and creating informational relationships; they settled into a functional role. They became tools for navigating websites and pointers for sharing content on social networks. Finally, links became click-bait — transparent come-ons for traffic in an accelerating race to the bottom of our brainstems. We found ourselves arguing whether links help us see connections or just distract us or make us stupid.
Rosenberg touches on the vulgar version of Innisian monopolies when he points out that Google makes money because “they put a price tag” on links. But he offers a way of understanding a more expansive, and less vulgar, version of it, when he points out that Google, just by existing, has changed the default practice of hyperlinking and the default understanding of what hyperlinks are for, and that Google specifically did not intend to do this. They wanted to help people find information, not redefine what “hyperlinking” meant. Even so, that is what happened.
As with many of his core concepts, Innis never specifically defined what he meant by “monopolies of knowledge”. It falls to those who seek to build on his work to try not only to understand their potential applicability today, but also how best to conceptualise them so that they can be applied. By including interpretive, pragmatic and other forms of knowledge, and by avoiding the temptation to always ascribe either the creation of a monopoly or its maintenance to deliberate intentionality, I think this concept of Innis’ can be applied much more fruitfully to the contemporary communication environment.
An example to consider: Facebook’s Internet.org project gets criticised because of the financial incentive that lurks behind it. A more pressing critique, one that sees the project as potentially creating problems not intended by the project creator, would ask if, simply by existing, Internet.org might transform the understanding of what the Internet actually is. My initial answer is that, without necessarily meaning to, it drastically changes the nature of the Internet into one in which the Internet is, for all intents and purposes, Facebook.
Facebook recently changed the way online profiles work in the event of a profile owner’s death. A new feature, not yet available in all countries, allows existing (living) users to add a legacy contact to their account. In the event of a profile owner’s death, the legacy contact will have a limited amount of control over the deceased user’s profile: they can add new content to the Timeline, add new friends, and change the profile picture and cover photo.
For the most part this is simply an extension of the existing service that Facebook offered for memorialising accounts of the deceased. Previously, memorialised accounts could not be changed by anyone else. This in a few cases meant that profiles of a deceased person were not memorialised but were then controlled by someone else who knew the deceased user’s login credentials. The legacy contact feature would appear to eliminate the need for this somewhat awkward arrangement.
The theory driving the change, and Facebook’s memorialisation service in general, is that the profile of a deceased user is a memorial to that user. Memorialised accounts, according to the Facebook help page on memorialisation, are “a way for people on Facebook to remember and celebrate those who’ve passed away”. Similarly, a Facebook spokesman told Mashable that “Memorialization allows friends and family to post remembrances and honor a deceased user’s memory, while protecting the account and respecting the privacy of the deceased.” This implicit understanding of memorialised profiles is rendered explicit by a further tweak Facebook made to memorialised accounts: prepending “remembering” to the displayed name. Now it’s quite clear that a memorialised account isn’t the same as a Facebook account of a living person.
I find Facebook to be actually somewhat ahead of the curve when dealing with issues of thanatosensitivity in social software, or the need to take into account the unavoidable reality of death, largely because Facebook tends to run into these issues first, given the extent and ubiquity of its service. Even so, I worry that these changes are being driven by the privileging of one possible model of understanding post-mortem profiles – that of a digital memorial – at the expense of others. In particular, many people treat post-mortem profiles as what I would call a post-mortem persona.
In quite a few cases, those with the authority to request the memorialisation of a deceased user’s Facebook profile will not do so, even when they are aware of the facility. Researcher Natalie Pennington, in her study of Facebook mourners, found that many of the people she interviewed, even when they knew of the memorialisation feature, “indicated that they were not interested in turning the profile into a memorial page…because the personal touches provided by the deceased that helps them to feel so connected were removed and left them less connected”. Similarly, Stephanie Buck, writing for Mashable, claims that “most users don’t raise a Facebook flag [for profile memorialisation] at all, choosing instead to peruse and interact with a person’s regular Facebook presence even after his or her demise”. Many, though by no means all, of those who mourn on Facebook appear not to want a post-mortem profile to be a de-personalised memorial, but to in some way retain the uniqueness and individuality of the person as they existed in life.
To some extent, then, I suspect many mourners don’t want a stark division to exist between a person’s social media profile as it existed in life and as it exists after their death. This is quite in line with the “Continuing Bonds” theory of grief, in which grieving is presumed not to be a way of letting go of a relationship so much as renegotiating the relationship one continues to have with the deceased, even though the deceased themselves no longer actively play a part in maintaining it. The official memorialisation process, including the new features of allowing legacy control of an account and emphasising the division between living and deceased people’s profiles, seems to me to work against those who wish to use the profile in this way: as a continuing, albeit post-mortem, persona rather than a new memorial, severed through memorialisation from the life of the person that went before.
This is complicated of course by the fact that there are other mourners, mentioned in Stephanie Buck’s article for instance, for whom the continuing presence of a “live” profile of a deceased person is an imposition and unwanted reminder of grief rather than an aid to mourning. Such people may prefer memorialisation or deletion of the account altogether. I guess the point is that for all that Facebook is trying to do the right thing, their proposed solution is too biased towards one possible view of what a deceased person’s profile actually is, at the expense of another, possibly more common, one.
Should a Facebook profile of a deceased person be a memorial and nothing more, or is it reasonable to treat it as an expression of persona even in the absence of a living person performing that persona?
Going back to some academic readings from the 1990s, it’s surprising just how much it was taken for granted that virtual reality was just on the verge of becoming the Next Big Thing [tm]. It’s also a reminder of just how seriously people took the primitive, text-based communication environments of MUDs and MOOs, where people engaged in real-time, text-based interaction, including text-based construction (through description and narrative) of their “cyberspace environment”. The relationship is not coincidental, as MUDs and MOOs were often regarded, as Mark Poster put it in The Second Media Age (p. 51), as “transitional forms of virtual reality”.
Today, you’d be hard-pressed to find someone among the general population who’d even heard of these arcane online “environments”. If they could still be considered as the first steps into “virtual reality”, then their successors would be Second Life or MMORPGs like World of Warcraft. While these have enjoyed some measure of success, they’re a far cry from the “wave of the future” expectations that drew such interest in MUDs and MOOs. By far the most popular mainstream form of Internet-based media today is so-called “social media”. And those forms of media, like Facebook and Twitter, don’t conform to the model of a “virtual reality” “cyberspace” at all.
I think a lot of people studying the Internet in the 1990s assumed that the everyday experiences of time and space would simply be transposed into online media environments. There wasn’t much focus in 90s discourse on the possibility that online time-space experience might be radically different.
There were two academics in the 1990s who hedged their bets a little. Jay Bolter and Richard Grusin published a book in 1999 called Remediation. Their basic thesis was that all forms of media tended to be culturally defined in terms of other media. The distinctiveness of their theory was in their claim that, in contemporary Western societies at least, cultural processes of comparative definition always attempted to portray a particular media form as superior to other media in terms of two competing human needs: the desire for transparent immediacy, where the experience of using the media form felt like no media was involved; and the desire for hypermediacy, where the experience of using the media form brought the fact of its mediated nature to the forefront of attention. The different ways in which different media were defined in terms of whether they were better at providing transparent immediacy or hypermediacy constituted “the logic of remediation”.
Both transparent immediacy and hypermediacy are differently involved in different ways of creating an “authentic” experience. Transparent immediacy in media is claimed to achieve this through the assertion that a particular mediated experience is equivalent to unmediated experience and therefore “authentic” in that way. Hypermediacy is a little trickier. Bolter and Grusin suggest at one point that hypermediacy always involves a proliferation of heterogeneous and fragmented content presented in media, such as you find in a contemporary multi-windowed graphical user environment. This supposedly creates a more “authentic” experience because, in multiplying “the signs of mediation”, the result “tries to reproduce the rich sensorium of human experience” (p. 34). I’m not fully on board with this description of it, though.
It’s fairly clear that the promise of virtual reality is its claimed superiority in providing transparent immediacy: a fully immersive 3-dimensional environment would be much more realistic than the flat, two-dimensional screens that characterise almost every other form of visual media. It’s not that hard either to see, just from looking at Facebook’s main interface, that there’s a logic of hypermediacy going on: disparate pieces of information are being bundled together in a way that would be impossible without Facebook’s mediation. At its most basic, the dominance of social media over (current efforts at) virtual reality suggests that in the logic of online remediation, the desire for hypermediacy has been stronger than the desire for transparent immediacy, contrary to the beliefs of people studying the Internet in the 1990s. But I think it’s somewhat more complicated than that. This is because I don’t think Bolter and Grusin got their “hypermediacy” concept quite right.
They skirt the edges of what hypermediacy could actually mean very late in their book, when they suggest that “the logic of hypermediacy” entails the “crowding together of images, the insistence that everything that technology can present must be presented at one time” (p269). The “proliferation”, “heterogeneity” and “crowding together” of hypermediacy, I would suggest, is just one of a number of ways in which media “hypermediate” the experience of time-space itself. Where “transparent immediacy” refers to a mediated experience of time and space that is identical to an unmediated one, hypermediacy refers to a mediated experience of time and space that is only possible through mediation. This could refer to a “crowding together” of images, as Bolter and Grusin suggest. However, it could also refer to, for instance, the placing of Friend activity in reverse chronological order on a Newsfeed, so that the reader’s temporal experience of reading them can be the exact opposite of how it would occur if it was unmediated. “Proliferation” isn’t the core of “hypermediacy”. The re-ordering of spatial and temporal experience is.
By adopting this perspective on hypermediacy, the “logic of remediation” can be considered as less of a generic quest for “authenticity” in mediated experience, and more as a way of considering what kinds of ways a particular culture wants to re-orient (or preserve) experiences of time and space, according to how those attempts are instantiated in that culture’s dominant media forms. This goes beyond just the dualism of immediacy/hypermediacy, and into specifics of just how and why certain specific re-orientations of time and space occur. With any luck, such considerations might reveal something fundamental about the culture at issue.
Just why does Facebook re-orient time-space experience in the way that it does?