Home » Reflections on Culture

Category Archives: Reflections on Culture


The binge-watch: a viewing practice against the digital stereotype

With the launch of Netflix in Australia, a somewhat new form of involvement with media will likely become much more widespread: the binge-watch.

Binge-watching a TV series simply means watching every episode of a multi-episode series in one sitting. I suspect that it only became a common phenomenon over the last ten years or so, as acquiring entire seasons’ worth of a TV show became cheaper and easier, and as TV storytelling tended increasingly towards season-long arcs, so that viewers are better off watching all prior episodes of a show before they watch a new one.

Of interest to me is the way that binge-watching to some extent bucks the general trends associated with media use in the so-called “digital age”. True enough, as Manuel Castells points out in the new preface to the 2010 reprinting of his seminal “network society” trilogy, television programs themselves are increasingly watched on computers, or even mobile devices these days, and increasingly at a time and place of the viewer’s choosing, not at a time pre-programmed by a TV station. Castells can reasonably say that “The Web has…transformed television” (p. xxvii) on this basis, on the consumer end mainly by making the act of watching a more individualistic, less communal experience. It’s notable here how such a transformation deviates from the claimed overall thrust of the “digital transformation”.

Ever since the 1990s, a strong, recurring argument made in favour of the Net over television, or “new media” over “old media”, was that the Net and associated new media were fundamentally, in their essence, interactive. This was not only a qualitative distinction but, according to its proponents, an empowering one. No longer would we have to sit by and passively absorb the content of mass media, as “Second-Wave” institutions and their ossified standardisation gave way to the new and vibrant “Third-Wave” world of de-massification. With the new media regime, we would regain the control over the creation of our own culture that the massification of the media in the early twentieth century had stolen from us. And yet here we are, where one of the newer modes of involving oneself with a cultural product is, at least during its consumption, a version of the same pre-Net media configuration that, if anything, intensifies many of the aspects that the proponents of the new interactivity disliked: a message broadcast by a unitary producer to a receiver who can’t talk back, and who consumes in isolation from other consumers.

Granted, there’s plenty of person-to-person interactivity these days that goes on around TV viewing after its consumption, via online fan communities and the like, but the experience on which that person-to-person interactivity depends is still one which doesn’t fit the version of interactivity that, to this day, is still to some extent normatively promised by “new media” in the “digital age”. Binge-watching, in itself, is consuming, not “prosuming”, let alone “produsing”.

That said, the practice of binge-watching also violates a claim about the trajectory of media development that is critical of “new media” developments. Within the academic literature, there’s the occasional claim that digital technologies transform communication away from an emphasis on dialogue and meaning and towards an emphasis on mere connectedness. Vincent Miller frames this in terms of digital technologies becoming increasingly oriented towards “phatic communication”, or communication aimed at performatively indicating a social connection, rather than aimed at dialogue or exchange of ideas. He sees this orientation away from actual information exchange as including an orientation away from concern about the commodification of information by media platforms, and worries about the consequences of that.

Miller associates the trend towards an increasingly phatic orientation to communication with the trajectory of media forms towards transmitting ever-shorter amounts of text, from blogging to social networking through to micro-blogging. He sees this trend as a core part of “digital culture”, arising in part because the demands of “connected presence” make “the time-saving role of compressed phatic communication” much more important (p. 395). Yet the tendency of binge-watching isn’t the compression of time spent watching TV; binge-watching is a significant increase in time spent watching, as compared to earlier TV use. In fact, it’s not all that uncommon to hear people speak of 12-hour marathons of watching a television show. Sure, they’re presumably taking meal and toilet breaks in there somewhere, but still…

It may seem like a simple point, but it can be very easy to get caught up in the rhetoric of the “digital age”, much of which implicitly or explicitly claims a clear, single trajectory to the development of digital media which uniformly applies to all media types and media experiences. As the phenomenon of binge-watching shows, this isn’t true. Consumption of television may indeed have transformed in the wake of the Web, but that transformation seems to me to be more of an intensification of the original television experience rather than, say, the shift towards choose-your-own-adventure style interactivity in TV story-telling that some of the more pious zealots of the New Media Revolution(tm) were imagining in the early 1990s. And that isn’t necessarily a bad thing.


How the Facebook Manipulation Experiment Showed the Importance of “Algorithmic Neutrality” to Social Media by Violating it

It’s been a few days now since online outrage started over a recently-published study involving Facebook. The study involved the manipulation of people’s Facebook News Feeds in order to assess the possible existence of a phenomenon of “emotional contagion”. Basically the research entailed slightly changing what showed up on nearly 700,000 people’s Facebook News Feeds. The amount of “positive” or “negative” messages displayed on people’s News Feeds was systematically varied, in order to see if this had any effect on how many “positive” or “negative” messages were subsequently posted by the people who read those News Feeds. The study found that there was a small but statistically significant positive correlation between the overall emotional tenor of the messages a person saw in their News Feed and the overall emotional tenor of their own Facebook posting. But the findings of the study are not what so many people have found so disconcerting about it.
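The logic of the study’s design can be illustrated with a toy simulation. To be clear, the numbers, the contagion coefficient and the model below are all hypothetical, invented for illustration, and bear no relation to the study’s actual data or code: vary the emotional tone of what simulated users see, and check whether the tone of what they post shifts with it.

```python
import random

random.seed(42)

def posting_positivity(feed_positivity, contagion=0.3):
    """Toy model: a user's posting tone partly mirrors their feed's tone."""
    noise = random.gauss(0, 0.1)
    return contagion * feed_positivity + noise

# Two experimental groups: feeds with positive content reduced vs. a control.
reduced = [posting_positivity(feed_positivity=-0.5) for _ in range(10_000)]
control = [posting_positivity(feed_positivity=0.0) for _ in range(10_000)]

mean_reduced = sum(reduced) / len(reduced)
mean_control = sum(control) / len(control)

# A small but systematic group difference, analogous in shape (not in size)
# to the effect the study reported.
print(mean_reduced < mean_control)
```

The point of the sketch is simply that an effect like this is detected as a difference in group averages across a very large sample, which is how an effect can be “statistically significant” while being tiny for any individual user.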

The source of outrage seems to be more around the way in which the study was performed. But the actual details of what is so problematic about the study’s ethical and methodological choices are not always well expressed. It doesn’t help either that there seem to be several issues of concern at stake: issues of informed consent in experiments, issues of privacy, issues of when it is or isn’t ethically acceptable to manipulate someone.

It’s that last one that interests me, and it’s the one that I think makes most people outside of the Silicon Valley tech-culture bubble intuitively feel that a study that manipulates Facebook News Feeds is somehow “creepy”, even if they may have trouble putting the reason for their feelings in words. Rather than denigrate this intuitive reaction as somehow the product of an “online tantrum industry“, it might be more worthwhile to investigate why that feeling occurs, and see if it’s justified. I believe that this intuition stems from an unstated assumption about social media and web 2.0, an assumption so ingrained that it’s only when that assumption got violated in this study that it could become clear that it was an assumption.

Call it “algorithmic neutrality”. The idea of algorithmic neutrality is that, where a software service chooses what to show or not to show an individual, as already occurs in Facebook’s News Feed, this decision-making process is (a) automated, and (b) the automation is implemented in exactly the same way for each individual user. This is true even when services are “personalised”: in algorithmically neutral personalisation of services, the process of personalisation is algorithmically determined, not by any manual inspection of what an individual person does or does not do on that service.
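To make the distinction concrete, here’s a minimal sketch. The scoring rule is entirely hypothetical (it is not Facebook’s actual EdgeRank, whose details aren’t public): an algorithmically neutral ranker applies one scoring function uniformly to every user’s candidate posts, while a manipulation of the study’s kind layers an extra, non-uniform filtering step on top for a selected group.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float  # likes, comments, shares, etc.
    age_hours: float

def rank_feed(posts):
    """A neutral ranker (hypothetical rule): the same scoring function
    is applied to every user's candidate posts, with no per-user tweaks."""
    return sorted(posts, key=lambda p: p.engagement / (1 + p.age_hours),
                  reverse=True)

# Neutrality holds: identical inputs yield identical feeds for any user.
candidates = [Post("alice", 10.0, 2.0), Post("bob", 4.0, 0.5)]
assert rank_feed(candidates) == rank_feed(list(candidates))

def manipulated_feed(posts, user_in_experiment, is_negative):
    """The study-style intervention: an additional rule applied only to
    users placed in the experimental group."""
    ranked = rank_feed(posts)
    if user_in_experiment:
        ranked = [p for p in ranked if not is_negative(p)]  # non-uniform step
    return ranked
```

On this framing, the violation isn’t that a feed is filtered at all — it always is — but that the filtering rule applied to some users differs from the rule applied to everyone else.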

Most contemporary social media services don’t make a point of indicating algorithmic neutrality because it’s almost always just assumed. In fact it’s only if algorithmic neutrality is assumed that it becomes possible for the “revolutionary” rhetoric of social media to make any sense at all. Social media, according to this rhetoric, is “democratising” and “empowering” precisely because no government or corporation stands in the way of masses of ordinary people communicating and coordinating their activities via social media platforms like Facebook. Implicit in this assumption is that Facebook, for example, isn’t involved in altering the content of what people see in any way that might be subject to political manipulation.

This is why the study comes across as creepy, even though Facebook routinely configures what appears on people’s News Feeds. The routine configuration is algorithmically neutral, automatically processed according to the pre-set, universally applicable rules of the EdgeRank™ algorithm. The manipulations in the study were not.

By making intentional, un-algorithmically determined alterations to Facebook News Feeds, the Facebook emotional manipulation study violated the principle of algorithmic neutrality. Furthermore, in doing so it contradicted a foundational assumption of what supposedly makes social media politically revolutionary. It’s fascinating to watch defenders of this study make this exact point about the non-revolutionary nature of social media, essentially saying that Facebook is a media business just like any other media business, and any changes to their informational product (not yours), such as manual alterations in the way News Feed data gets displayed, are just like any other media business changing their product. New media, in this framework, is just as corporate-owned and corporate-ruled as old media.

Is social media really “revolutionary”? I’d say that it still could be, to the extent that algorithmic neutrality can be adopted as an ethical principle. But that also raises the question of whether a media business can commit to algorithmic neutrality and still remain profitable in the face of competitors who might decide not to be so ethical. There’s also the question of whether “algorithmic neutrality” is really possible, as there’s a small but burgeoning body of literature arguing that an algorithmic implementation can never be politically “neutral”, but is always governed by the political circumstances of its implementation. And then there’s the question of how exactly, in the absence of detail about a company’s inner workings, one can determine whether a company is really being algorithmically neutral or not. In any case, I think this study has exposed the default assumption that social media companies are automatically algorithmically neutral, and demonstrated it to be unwarranted. And this is the main reason why the study is intuitively “creepy”, at least within the context of how people, bombarded as they are with the message that social media is “revolutionary”, “democratising”, and “empowering”, previously expected social media services to operate.


Game of Thrones Spoilers is Serious Business

So in theory, the contemporary media environment often makes it possible to watch TV shows at one’s leisure. In between online streaming and innumerable recording options for broadcast TV, there’s not much obstacle to any type of time-shifting – technologically speaking. And even if the legally available options don’t suit, the illegal options are plentiful, and it’s almost laughable how little their illegal status has impeded their uptake and use. So there’s no actual obstacle to choosing to view a television series in one’s own time and at one’s own pace – technologically speaking.

Thus, Manuel Castells has claimed in the preface to the 2010 edition of his seminal work “Rise of the Network Society” (p. xxvii) that many of those born in the 1990s or later “do not even understand the concept of watching television on someone else’s schedule”. Of course, copyright law impedes this, but it doesn’t fully prevent it, as the ongoing popularity of sites like The Pirate Bay demonstrates. But even so, there is an impediment to effective viewer time-shifting. It’s a cultural one. It’s spoilers.

For all that people can technologically delay watching a show, doing so means they risk finding out what happens from others before they get a chance to see it. This risk increases the more popular a show is.

A friend of mine in America had been putting off watching Game of Thrones episodes as they aired, intending to wait until all the episodes of this season had been broadcast and then binge-watch the whole season. This plan met an obstacle when he was exposed to information about the most recently aired episode by someone presumably unaware of his plan. This has likely significantly reduced his future enjoyment of the series.

I’m a little fascinated by just how seriously people take the issue of spoilers for Game of Thrones – more seriously than for the average serial drama. I also find myself largely sharing that seriousness (hence no comments allowed on this particular blog post, just in case). It’s basically an outcome of a situation in which episodes of television series still have an initial airing date, and a set amount of time between the airing of each episode. The result seems to be a sort of social and cultural engagement by fans with the content of an episode as soon as it airs, continuing up until the airing of the next episode. Failing to watch an episode right away means distancing oneself from an important aspect of the cultural engagement with the series as it is performed collectively by a 21st-century television “audience” (or possibly “public”, to indicate that the idea of television watchers being passive, like an audience, is passé). Isolation is never easy.

Spoiler warnings are an attempt to remedy this. In theory, by labelling online content as “containing spoilers”, fans can engage with the rest of fan culture while knowing to avoid anything that carries the spoiler label. In practice, it’s quite imperfect. Even without “griefers” who take sadistic pleasure in intentionally spoiling, there’s still the chance of carelessness, error or simple disregard of the norms around not spoiling.

So even with the practice of labelling spoilers in effect, declining to watch an episode as soon after it airs as possible remains a risky proposition. In the case of my friend, the norms of spoiler labelling were either not fully understood, or there was an unwarranted assumption that a fan of the show would not wait overlong to watch a new episode after it airs for the first time.

The practice of spoiler tagging is a way to account for differences between what technology and broadcasting schedules can achieve in determining the available time that a show can be viewed, and the actual processes of fan engagement with the material. As such, it’s quite a serious fan issue. I wonder if the aca-fan scholars have engaged with it?


Youtube video: Kids React To Walkmans

So is the gulf between 1980s music technology and 2010s music technology really so much more notable than other 30-year gaps between music technologies, or are people who were teens in the 1980s simply starting to hit their nostalgic phase of life?

“Just text Me” by Keisha, and the message in the medium of texting

This amateur music video has been getting a bit of circulation:

Let’s take a moment to recognise that text messaging has become so embedded in our culture that a simple song from a guy dressed as a woman making Seinfeld-like observations in rap about the etiquette of texting someone versus calling them can achieve a relatively significant amount of collective resonance.

That said, what exactly is this song really saying?

The point seems to be one of effective time management, and how the hapless person-who-always-calls-and-never-texts makes that rather more difficult for the person on the receiving end of their “unnecessary calls”:

You don’t see it as a sign that you’re the only one that calls me and at the worst times.


it’s just that you’ve already wasted a minute of my time when I could’ve been doing something else.

There’s also a clear undercurrent that the poor subject of the song should “get with the times” and make the effort at understanding how to use the technology “properly”. At the very beginning you can catch a glimpse of previous texting on the main singer’s phone, and the girl she sings about has sent a self-pic to the main singer, immediately followed by the message “sorry, accidental selfie”. There’s a not-too-subtle suggestion that the person who always calls and never texts is (a) stupid, and (b) bad at using technology. These two facts may be related.

The assumption is that texting has made exchange of simple information easier and less burdensome, and that this is a good thing, at least so long as some fool doesn’t consistently fail to make use of the convenient new facility. This is the sort of default assumption that I think Marshall McLuhan was seeking to question in much of his work. His idea, radical at the time he first articulated it, was that assumptions about how media were presumably configured by human thought and behaviour tended to obscure the much more interesting process of how human thought and behaviour were configured by media.

So instead of seeing texting as a better way of exchanging simple coordinating information like “do you want your coffee hot or iced?”, McLuhan would have us investigate the way in which the existence of texting may create the belief that we should economise some of our communications. It’s not just that certain information is automatically better-suited to being texted, it’s that communicating this type of information with friends can now be understood as separable from the more affect-based component of communication as a means of building and maintaining relationships. And because this is now possible there’s this new idea that friends ought to do it wherever possible so as not to disrupt their friends’ time management unless, as the singer says early on, it’s something “much more important”.

Of course, McLuhan was often criticised for his basically non-existent consideration of the importance of non-media aspects of social life like, for instance, economics and capitalism. I believe it was the New Left of the 1960s who coined the derogatory term “McLuhanacy” to describe his philosophy and its lack of concern for the economic underpinnings of society, something the New Left regarded as central in importance. So I wouldn’t say it’s texting alone that contributes to a sense that wasting friends’ time with “unnecessary” voice communication is bad. The scarcity of time obviously plays an important part.

But I do think the existence of texting has generated assumptions about appropriate and inappropriate uses of time in communication given time’s scarcity, and those assumptions might in turn affect the value given to certain types of communication based on how the process of communication in question needs to be configured in time, both individually and collectively. The guaranteed control and rapidity of information sharing via text seems more valuable than the emotional depth available in voice communication, with its relative slowness and need for temporal synchrony. It’s only when there’s certainty that communication will have an emotional component – when it’s “important” – that voice is preferable to text.

Or, to put it another way, every text message is a closing off of the possibility of an unexpected emotional experience in communication. And many people will readily sacrifice this possibility for the certainty of being able to control their own schedule.

How Taboo is Facestalking?

“Facestalking is the act of reviewing in detail another person’s Facebook page to follow their activity without necessarily engaging in any form of communication with the person.” This is how Kirsty Young defines facestalking on page 26 of her (publicly accessible) journal article, “Social Ties, Social Networks and the Facebook Experience“. It’s a good first effort at a definition, and Young’s research suggests that this activity of monitoring without engagement is quite common: 67% of her 758 survey respondents said they used Facebook to “follow what is happening in the lives of others”.

I think Young’s research has glossed over an important distinction, though. There’s a difference between monitoring the activity of a Facebook Friend, and monitoring the activity of  someone who is not a Facebook Friend, and is probably never going to become one. “Facestalking”, or variants on the term, seems to be used as a description specifically for the latter situation more often than not, in my experience.

The existence of the distinction, and why this narrower version of “facestalking” is perceived as problematic rather than “generally positive” (as claimed by Young), is suggested in an open-ended response to one of Young’s survey questions. In response to the question of how a survey respondent would feel if their Facebook profile ceased to exist, one of them said that they would feel “relieved that my ability to stalk other people and look at their lives (eg. ex-boyfriends, etc) is over because that part of Facebook I have issues with and felt slightly guilty about doing”.

Young felt that the guilt was unwarranted given that it was quite common for people to use Facebook to monitor the activities of others. I think Young missed the importance of the fact that monitoring the activity of ex-boyfriends was given as an example of guilt-inducing “facestalking”. I can hazard a guess that this felt problematic because the survey respondent and the ex-boyfriends in question were no longer a part of each other’s social lives – but were still accessible to each other on Facebook.

How common is it to look at the Facebook profiles of people who aren’t in your social life, maybe even tracking their profiles as an ongoing matter? I suspect it occurs far, far more frequently than people actually admit. And why shouldn’t it? People like to know about other people, and the information is there to be read.

There shouldn’t really be any stigma attached to it according to the dominant ideas about privacy and “publicness” of the day: if a profile makes some details public, then it should come as no surprise that members of the general public are going to be able to see those details. And yet, I can’t think of a situation where two people have made unplanned mutual contact for the first time, and one of them casually mentions that they’ve already had a good look at the other person’s profile. Even if they have. Especially if they have.

So it seems to me that there’s a taboo on this kind of facestalking. Or possibly it’s just taboo to admit to it. Or perhaps it’s taboo to treat it like it’s not something that’s really weird to do. I’m not sure exactly. And I’m not sure as to why such a taboo exists. At the very least I think it suggests that, even in today’s supposedly super-connected world, the appearance of disinterest in people that aren’t in your social life is expected. Even when available technology makes taking an interest easy. Especially when available technology makes taking an interest easy.

What’s the nature of the taboo around facestalking? How strong is it? How strong should it be?

WarGames, computing and thinking

As movies go, the 1983 hacker-thriller WarGames is something of a cultural icon. Its (relatively) realistic depiction of hacking captured the minds of a whole slew of teenaged hackers-to-be, and serious news reports about computer break-ins frequently referred to the movie directly (one example here).

As films for cultural study go, WarGames offers quite rich pickings. Right now, I want to focus on the way in which the movie frames the relationship between computers and thought. There are two, somewhat contradictory frames at play. I’ve embedded two clips from the movie to demonstrate.

First up is the scene in which the WOPR (War Operation Plan Response) computer is introduced.

The bit that interests me the most happens at approximately 0:26. Mr Paul Richter says “the WOPR spends all its time thinking about World War 3”.

“Thinking”. That’s an interesting word choice.

Despite the WOPR being described in such anthropomorphic terms, there’s a clear concern as the clip plays out at the thought of critical military decisions being entrusted to a mere “machine”. The decision to place control of missile launch with the WOPR directly (taking the men “out of the loop”) is made only when assurance is given that control over the situation will remain with humans (albeit only with the humans “at the top”).

In the second scene, the young hacker David Lightman encounters “Joshua” (a.k.a. the WOPR) for the first time:

David is quite happy to point out that the computer “will ask you whatever it’s programmed to ask you”, and that its “voice” is nothing more than a translation of electrical signals into speech. Yet at 1:17, his reaction to the computer’s request to play a game is one of sheer endearment, such as you might give a precocious child making a request to play with you. Then Jennifer suggests that Joshua has “missed” his creator.

In the WarGames film, computers are at one and the same time nothing more than extensions of human will, as well as entities that can and do display a certain level of sentience or independent thought. The tension between these two competing understandings of what computers are is what, in my opinion, drives quite a lot of the movie.

The perception of the relationship between computing technology and human thought is one that I think might be of some importance to my thesis. Nowadays it seems generally accepted that computers are servants of humans rather than quasi-sentient entities in their own right. Although that said, might it not be the case that humans are actually the unwitting servants of computers?

That’s the premise of the film The Matrix, whose commentary on the relationship between computing, thought, and now perception is a whole issue in its own right. Best to stop writing now.