The Wollongong antivaccination PhD thesis: an informal assessment of its social science content
The University of Wollongong appears to have awarded a PhD for a thesis that argues against the health benefits of vaccination. Written by a person named Judy Wilyman and entitled A critical analysis of the Australian government’s rationale for its vaccination policy, it has already garnered a lot of criticism for being antivaccine pseudoscience. As many have pointed out, this thesis wasn’t submitted for a PhD in any natural science field (let alone one that might require actual medical knowledge about vaccines), but under the University of Wollongong’s humanities banner. It is the responsibility of the School of Humanities and Social Inquiry, to be exact.
There is actually a tradition of studying science itself as a social practice rather than as a philosophy or method of gathering knowledge. In fact, there are several. There’s the sociology of scientific knowledge (SSK), which later merged with general technology studies to become Science and Technology Studies (STS), and there’s also a field calling itself the political sociology of science, which I’ll get to shortly.
There’s been some controversy over studying scientific practice in this way, as the research framework for these studies often implicitly or explicitly claims that the truth claims of science are relative, or at least not as divorced from social context as proponents of science usually claim. I really hope that the publication of this PhD thesis doesn’t suggest that any social inquiry into science should automatically be disregarded.
While I do think it’s important to demonstrate the errors Wilyman makes around the state of scientific knowledge regarding vaccination, I also think it important, for the sake of the integrity of the social sciences and the humanities generally, to determine whether this thesis lives up to that field of knowledge’s standards as well. As a humanities/social inquiry student, it’s in my own self-interest. My own, admittedly far from well-educated, assessment of the thesis is that it fails those standards.
Though it isn’t really my area of expertise, I’ve tried to track down some of the sources this thesis used, specifically from the academic literature around the social study of science, to see how well they were used. Most of this seems to show up in chapter 8 of the thesis, where Wilyman offers a definition of ‘undone science’. This concept comes from peer-reviewed literature. But from my reading of that literature, Wilyman is using the concept incorrectly.
As pointed out by others, several of the books Wilyman relies on are not of an academic standard: they are polemics rather than well-argued assessments. I won’t address those. In chapter 8, sources that do have academic credibility in the social study of science include Hess (2007; 2009), Gross (2007), and Frickel et al. (2010). Together, these sources articulate the concept of ‘undone science’ that Wilyman uses to claim that scientific research on vaccination has been compromised. I think she’s misusing it.
The term ‘undone science’ comes from a newish field of study, called the political sociology of science. According to Hess (2009, p. 309), this field of study “draws attention to the politics of research agendas and the ways in which choices about scientific knowledge are outcomes of broader societal conflicts and coalitions involving not only research communities but also governments, industries and social movements”. The approach doesn’t embrace the utter scientific relativism of, say, the Strong Programme of SSK, but it does insist that scientists’ decisions about what should be researched, and why, can’t be divorced from “unscientific” influences like availability of research funding, and opportunities for recognition and prestige from peers. Scientific progress, in this framework, isn’t linear. It’s opportunistic. And there are areas of research, entire fields of potential scientific inquiry, that not only aren’t pursued due to the existing political incentives around actual scientific practice, but are presumed explicitly not worth pursuing. This is ‘undone science’.
I need to stress another aspect of this concept of undone science as it appears in the literature, precisely because Wilyman doesn’t mention it at all. Frickel et al. (2010, p. 444) describe undone science as “areas of research that are left unfunded, incomplete, or generally ignored, but that social movements or civil society organizations often identify as worthy of more research” (emphasis added). Hess (2007, p. 22) similarly treats ‘undone science’ as primarily an issue brought to light by social organisations: “from the perspectives of…activists and reform-oriented innovators, the science that should get done does not get done because there are structures that keep it from getting done”.
There’s a certain amount of relativism in these statements. What they are not saying is that there’s an objective, universally understandable idea of “science”, which universally serves the public interest but which gets distorted by institutional interests. Rather, they say that the entire field of science is internally divided by “relations of co-operation and conflict among advocates of different conceptual frameworks, research methods and problem areas” (Hess 2007, p. 27) and externally ‘aligned’ with policy makers and research funders, influenced by but also able to influence them (Hess 2007, p. 44), all while under the scrutiny of external actors such as civil society movements (Hess 2007, p. 43). The relationship between all these stakeholders is considered fairly complex, and ripe for empirical study.
All this nuance is utterly lost in Wilyman’s work. She describes undone science as simply “research that is not conducted because institutional barriers are constructed in the political process to prevent it from being done” (Wilyman 2015, p. 195). Her model of undone science is one in which it only appears if “political barriers arise” because “the interests of political, economic and industrial leaders synergise to control the direction of funding for scientific research” (Wilyman 2015, p. 196). This, she claims, “occurs at the expense of public interest science”. She presumes that this alleged collusion between government, industry and academia leads to public policy which “select[s] against some areas of science” (Wilyman 2015, p. 198). She then quotes Hess (2007, p. 21) out of context to suggest that because “most politicians do not have an in-depth understanding of scientific issues…the legitimacy of political outcomes therefore depends upon the values inherent in the production of science and in the use of science that has been accepted by all stakeholders” (Wilyman 2015, p. 196).
Hess did indeed point out that policymakers lack scientific knowledge. But he pointed this out to show how it was possible for scientists to escape from constraints on the autonomy of scientific practice. While those who fund research can fund it on the basis of what the funders rather than the scientists want, funders’ lack of technical proficiency means that “they can, to a certain degree, be told what they want” (Hess 2007, p. 44). And rather than positing an ideal of science which gets perverted to create ‘undone science’ as part of a general problem of “selective science”, the political sociology of science approach seems to treat scientific practice as always partially agonistic, with some form of undone science always in existence.
The case studies of Frickel et al. (2010) seem to bear this out. In their description of a dispute between industry and NGO groups about the viability of a “chlorine sunset”, they illustrate the research “paradigms” of both the industry groups and the NGOs. Both contain identifiable ‘undone science’, and the two tend to mirror one another. Illustrating the potential use of such an approach, the paradigms at issue (“risk” vs “challenger”) describe the political claims made by each group about how research should be performed. The claims at issue are political, not scientific, because they are based on assumptions about what research will find before any research has begun: one group assumes that testing individual chlorine compounds for environmental impact is enough, the other assumes that the class of chemicals as a whole is problematic and needs to be restricted until each individual compound is proven safe. Such a political question is likely intractable, but including the dimension of ‘undone science’ may help clarify it somewhat.
Further, in contrast to Wilyman’s characterisation of a “synergy” of powerful institutional actors ganging up to work against the public interest of the, er, public, Frickel et al. (2010) point out that civil society groups can and do act as a brake on specific areas of scientific research. One area where they found scientists refusing to engage in research, precisely because of pressure from outside groups, was research involving animal testing. Many scientists, according to the case study they examined, deliberately steered well clear because they feared the “terrorist” activities of animal rights activists. Frickel et al. suggest that some scientists similarly steer away from stem cell research due to the activities of right-to-life advocates.
All this is to say that undone science as an academic concept relies on a lot of paradigmatic assumptions about science that Wilyman does not adopt and directly contradicts. The interesting possible relations between partially co-operative and partially antagonistic, partially determined and partially autonomous, elite social groups and science practitioners are reduced to a morality play between the virtuous public interest that pristine (not “selective”) science serves and the evil profit motives served by villainous governments and industries, and their totally subjugated scientist lackeys. She makes use of Hess’ claim that “funding claims what can be done and what will be done as well as what remains undone”, but utterly ignores his warning that “this argument can turn into a simplistic, externalist form of economic determinism” (Hess 2007, p. 32). In Wilyman’s thesis, that’s more or less exactly what happened.
It would be interesting to assess Wilyman’s own work by the standards of the political sociology of science. It’s not my field, though: any errors in the above are mine, not those of the respectable academic authors I’ve quoted. I would like to point out, however, what seems to be the fundamental political orientation that undergirds Wilyman’s whole project. It appears on the last page of the conclusion: “Healthy communities are achieved by increasing individual autonomy, that is, the individual’s right to choose how they care for their own bodies in the prevention of disease. This prevents indoctrination and it must be respected and promoted in public health policies that ensure better health is the primary outcome of these policies” (Wilyman 2015, p. 308).
Wilyman’s axiomatic assumption is that health is achieved first and foremost by retaining personal autonomy. It isn’t achieved by, say, valuing health expertise and the knowledge associated with it. The political value of freedom comes before everything else, and the exercise of this freedom – by refusing to participate in building up herd immunity through mass vaccination, for example – can axiomatically never be unhealthy for others. Any science that says otherwise must be wrong.
References
Wilyman, J., 2015, ‘A critical analysis of the Australian government’s rationale for its vaccination policy’, Doctor of Philosophy thesis, School of Humanities and Social Inquiry, University of Wollongong. http://ro.uow.edu.au/theses/4541, accessed 15 Jan 2016.
Typing out Loud: digitalisation as a problematic modernising offensive
What is “modernity”? Many social and political theorists use this term to describe the outcome of significant changes in the West – the Enlightenment, the French and American Revolutions, the Industrial Revolution – that separate modern societies from “pre-modern” conditions. A frequent interest of modernity theory is the possibility that these kinds of changes are now going global, and what consequences this has both for the world and for the concept of “modernity” itself.
The concept is certainly problematic already. The claim to the status of modernity, of “being modern”, is implicitly a claim to an inherent superiority. This holds even if modernity is becoming a global phenomenon, since “the West” still claims to be the original source of modernity. Moreover, where the concept of being modern in the West usually entailed ideas of social and economic progress, this progress was usually bought at the cost of the conquest and exploitation of non-Western peoples. The history of European and Western “modernity” is equally a history of international colonialism and imperialism (Knauft 2002).
Contemporary modernity theorists are generally aware of these problems in varying degrees, and attempt to take them into account. In the process, the moral weight of the concept of modernity of course becomes ambiguous. While most modernity theorists, in contrast to theorists of postmodernity, retain a sense that the “project of modernity” (Habermas 1983) still has moral goals that are conceivably of universal applicability to humanity, they now treat the possibility of identifying these goals and actually implementing them as much more fraught and ambiguous than was suggested by the more naive views of “progress” held by European and Western social and political theorists of the 18th through 20th centuries.
As a consequence, modernity theorists often claim that there is no single model of modernity. Different societies and cultures can be modern in very different ways. Some take this to further suggest that Europe itself has experienced different kinds of modernity in its history. Wagner (1994; 2013) is one such modernity theorist. Moreover, he attributes the existence of multiple modernities to an ambiguity inherent in the moral values underpinning modernity itself.
Wagner (2013) claims that there are two main ideals that underpin any expression of modernity: autonomy and rational mastery. The relationship between these two ideals is inherently ambiguous: increasing autonomy often works against the possibility of mastery by some; increasing mastery often works against the possibility of autonomy of some. Both the expression of these two ideals, and the way they are accommodated (or not) to each other, can and do vary tremendously. A modern collective experiences crisis when the accommodation between these expressions of ideals breaks down, which it all too frequently does. Wagner (1994) characterises the history of modern Europe as a succession of types of modernity, each new one implemented in response to the crisis of the previous one.
Wagner’s theory of modernity is also distinctive in that he doesn’t entirely treat the process of modernisation as the ongoing and continuous work of completely impersonal forces. Rather, he claims that modernisation is partial, sporadic, and can often change course. This is because he claims that modernisation is the product of the actions of social agents. Modernisation, as a transformation of social structure and of knowledge (including knowledge of values), occurs through the empirical workings of situated social actors. These social actors need to have access to the power and resources of a society that will enable them to make the changes they desire, and they will usually not do so uncontested. This accounts for why conditions associated with modernity appear unevenly in history and around the globe.
The reason that modernity is multiple is that the social activity of modernisation, when it occurs, is done in the name of implementing the ideals of modernity. But there is no set way in which these ideals can be implemented. As already mentioned, the relationship between the ideals of autonomy and rational mastery is an ambiguous one. But the implementation of even a single ideal can be imagined very differently too. American liberal democracy, Marxist communism, and European democratic socialism all have very different ideas about what “autonomy” means in practice. But they are all forms of modernity.
Sometimes, further, modernisation entails not only the deliberate effort to implement a modern ideal, but also a deliberate effort to undo prior modernisation efforts that implemented the ideal (or ideals) of modernity differently. Such efforts at transformation can also, of course, be aimed at attempting to clean up the unintended consequences of earlier modernisation attempts, for example where a focus on one ideal might have led to the other ideal being implemented poorly or haphazardly.
Wagner (1994) refers to such intentional efforts at modernisation as “modernising offensives”. In this regard, I would say that the development and diffusion of digital technology throughout the globe, in both Western and non-Western societies, is part of the process of such a modernising offensive. The social agents are, currently, the tech elites of Silicon Valley. The ideal is autonomy. The expression of it is “empowerment through digital media”, envisioned perhaps as connectivity (Schmidt & Cohen 2014) or as access to near-infinite data (DuBravac 2015). There are almost certainly others. In any case, their extreme focus on autonomy has left the place of rational mastery wide open. This is a problem.
In this particular modernising offensive, there seems to be a presumption that rationality is inherent to the technology itself. Such a presumption no doubt contributes to the claims by the modernisers that the technological transformations they envision are “inevitable” (Schmidt & Cohen 2014, p. 261), something that “will happen regardless of which road we take” (DuBravac 2015, p. xxii). The promise of empowerment through technology is so great that it will be sought after and implemented no matter what. It’s simply the rational thing to do.
Empowerment implies liberation of the self from the constraints of others, but it also implies the ability to constrain. The fear that I have is that the tech elite engaged in this “digitalising” modernising offensive have presumed that they don’t need to worry about how their commitment to autonomy might lead to problematic implementations of mastery in society. A longstanding critique of classical modernity is that the will to power over nature becomes a will to power over people (Horkheimer & Adorno 2002). In terms of this new “informational” or “digital” modernity, the digital modernisers – the tech elites of Silicon Valley – seem to express an interest in possible problems of people having mastery over people only to the extent that they presume digital media will liberate people from the old forms of social control.
Even if they do (which is far from assured), that doesn’t eliminate the possibility of new ways of exerting control coming about as an unintended consequence of a digitalising modernising offensive. Indeed, I suspect that they already are, in the form of new ways of manipulating both individual and collective attention.
The problem with much criticism of contemporary technology is that the proponents of that technology all too readily paint their opponents as Luddites. Since technology is also the core resource of their modernising offensive (both as the means of engaging in it and as the resource to be diffused), opposition to such an effort doesn’t just seem anti-technology, it seems anti-modern. A critique of this modernising offensive, in a society that values modernity, needs to perform that critique in the name of modernity too. But it needs to be in the name of a different image of modernity. It needs to be a modernising offensive of its own, one that can explicitly explain the intended nature of its commitment to both autonomy and mastery, and can explicitly explain the means of reaching an accommodation between them. To be honest, I’m currently not sure if that’s even possible.
Monopolies of Knowledge and Hyperlinks: Addendum
In reference to this post and the changed way in which we use hyperlinks today as compared to the 1990s, a colleague of mine pointed out that there is in fact a website where it’s possible to engage in the kind of free-wheeling jumping from place to place characteristic of the early web. That website is Wikipedia. Getting lost amidst the pages, with no idea of how you ended up where you are, is quite common for some people.
I suspect it may not be a coincidence that, of all the large websites still around today, Wikipedia is also one of the few that is still 100% funded by donations, not by advertising revenue. They have no incentive to try to get rated highly in Google. It’s intriguing that Wikipedia pages often show up very highly in Google search results regardless.
Maurice Newman, skepticism, and the contemporary politics of knowledge
I probably shouldn’t go off my main subject like this, not least on a topic that attracts more irate drive-by commenting than any other topic I know, but it’s turned out to be a day of looking at scientific claims with a skeptical eye, so….
As widely reported in the Australian media, Maurice Newman, Tony Abbott’s chief business advisor, wrote an op-ed in The Australian claiming that the whole idea of climate change is being put forward by the UN in order to subvert capitalism, freedom and democracy (and possibly Mom and Apple Pie as well). He’s received a lot of well-deserved mockery online. The premise itself is laughable on its face. Is it really worth it to actually look over the specific claims he’s making?
I do have an unfortunate tendency to try and do that, and I did indeed make the attempt. Right here and now, I’m not going to go over every claim. I’m very wary of succumbing to what some opponents of pseudo-science call the Gish Gallop, a debating tactic relying on overwhelming your opponent with a mountain of apparent “facts”, all quick and easy to present but difficult and time-consuming to debunk. There are quite a few such “facts” in Newman’s op-ed. I’m just going to focus on one, as it’s highly revealing about a number of things.
Newman wrote the following:
Make no mistake, climate change is a must-win battlefield for authoritarians and fellow travellers. As Timothy Wirth, president of the UN Foundation, says: “Even if the (climate change) theory is wrong, we will be doing the right thing in terms of economic and environmental policy.”
The interesting thing about this alleged quote from Timothy Wirth is that Newman provided no source, just as he failed to do for every other “fact” he wrote (a very useful tactic in the Gish Gallop). The first impulse of a 21st-century citizen confronted with a questionable claim – Google it – turns up a mountain of hits for this alleged quote. Every single one of the results in the first 3 pages is either a site dedicated to climate “skepticism” (the very reason they come up in Google is the reason the scare-quotes are well-deserved), or a site oriented to showing why everything remotely left-wing is evil incarnate (I’d call them “right wing sites”, but I’ve met enough sane right-wingers not to generalise these hate-sites as representative of the entire political right, so I’ll call them by the slightly more accurate name of “anti-leftists”). Not a single site that I looked at provided any attribution for this quote.
This shows two things. First, Newman almost certainly learned of this alleged quotation from one of these sites. Second, Newman is not a skeptic. He made no attempt to check the validity of this quote, but believed it anyway.
What is skepticism? It doesn’t mean flat-out refusing to believe something even when there’s evidence that it’s occurring (such as there is with anthropogenic climate change). It also doesn’t mean refusing to look for evidence in favour of a position just because that evidence is currently lacking. A skeptic should still be willing to change their mind in the face of convincing evidence. So I was skeptical of the validity of this alleged Wirth quote, but I continued to look for evidence of its veracity.
Interestingly, the most fruitful line of pursuit came from Wikipedia, but not from a Wikipedia article. Timothy Wirth is a public figure, so he has a Wikipedia page. The alleged quote doesn’t appear there, which by itself doesn’t say much. However, once you flip to the Talk page, things get interesting.
In her latest work, danah boyd directly confronts the issue of contemporary American teens using Wikipedia for school-work. Against the conventional practice of overtly or covertly encouraging students to avoid it, she makes the interesting claim that the real value of Wikipedia as a learning tool is on the Talk page. There, she says, you can trace the very process of knowledge production as it occurred. In the case of Timothy Wirth, the Talk page suggests that the alleged quote was present at one point but got removed, with the explanation that there was no primary source provided. You can see, further, attempts to find that primary source, and specific details, not readily findable with a raw Google search, emerging in the process of trying to justify its inclusion on Wirth’s Wikipedia page.
It may come as no surprise that the findings on the Talk page so far strongly indicate that the quote, if valid, has been mangled. While Newman’s article strongly implies that the “economic and environmental policy” at issue has something to do with “authoritarians and their fellow travellers”, an alternate version, quoted in an article from Real Clear Politics, narrows the focus down merely to energy policy:
Sen. Timothy E. Wirth, D-Colo., said it in 1988, as the National Journal reported. “What we’ve got to do in energy conservation is (to) try to ride the global warming issue. Even if the theory of global warming is wrong, to have approached global warming as if it is real means energy conservation, so we will be doing the right thing anyway in terms of economic policy and environmental policy.”
We finally have an alleged primary source: National Journal, 1988. Sadly, these archives don’t appear to be online in a readily-accessible form, although further info from the Talk page suggests that the title of the National Journal article in question is “Less Burning, No Tears”. A Google Scholar search on this title is somewhat fruitful.
There is a citation to this National Journal article in the peer-reviewed journal Energy and Environment. In a journal article critiquing the merits of focusing on energy policy as the primary means of addressing climate change, the authors quote Senator Timothy Wirth as saying the following:
What we’ve got to do in energy conservation is try to ride the global warming issue. Even if the theory of global warming is wrong, to have approached global warming as if it is real means energy conservation, so we will be doing the right thing anyway in terms of economic policy and environmental policy.
The quote, in that form, does appear to be accurate. The journal article quoting it describes it as an example of the belief that the best way to address global warming is through energy policy (although I think best practice would have been to cite it as “cited in Stansfield 1988”, since it’s not a direct quote from Stansfield). It doesn’t delve into the issue raised by the quote about whether or not it’s pragmatic to go ahead with changing energy policy as if global warming were occurring even in the face of possible uncertainty about its reality.
There’s certainly a pragmatic argument to be made about what’s best to do in the face of available scientific evidence, and that’s not a scientific question about what is actually occurring. It takes a lot to get from there to the routine trotting out of this quote as some sort of proof of a hidden agenda behind the claims about what actually is happening to the climate, though. I thought it was pretty conventional wisdom in philosophical and scientific circles that “is” and “ought” questions are of two different orders.
Also important, particularly for Newman’s conspiracy claims about the UN, is that Newman describes the (mangled) quote from Timothy Wirth as coming from the “president of the UN Foundation”. But Wirth held no position at the UN when this quote most likely appeared, in 1988. He was a US Senator, nothing more. Did Newman bother to check this? I think we all know the answer to that question.
In terms of the contemporary politics of knowledge, I think this demonstrates pretty well that the use of the term “skeptic” is extremely inappropriate when applied to climate change deniers like Maurice Newman. Skepticism would entail an equitable evaluation of evidence, not a one-sided credulity towards the supposed meaning of mangled and misinterpreted quotations. Such deniers do not deserve the “skeptic” label they have misappropriated for themselves.
In terms of the politics of knowledge of Wikipedia, it suggests that Wikipedia can actually be a pretty good filter for reliable information, if the quality of discussion on a Talk page is good: better than Google in this case. Also, danah boyd may be right.
In terms of the general politics of knowledge online, this seems like a very good example of the echo chamber effect, and an intriguing case study in what happens when someone who was stuck in that echo chamber, like Newman apparently was, dares to venture outside of it. Looks like Australia’s general interest intermediaries are still doing their job. For now.
Inaugural Issue: Persona Studies Journal, Vol 1, Issue 1
Persona Studies is a new, open-access, online academic journal being run out of Deakin University. Its remit is the concept of persona, and how that concept may frame contemporary culture. The journal’s very first issue is now online. Click here to go to it.
I’ve had the privilege of seeing some of the difficult process of getting this journal started, and I’ve also indirectly helped out with some review work. I’m happy to see it get off the ground.
The Call for Papers for the next issue is already out. The intended theme is “work(ing) personas”. More detail on the CFP is available here.
Monopolies of Knowledge: A Better Conception, Illustrated by the History of Hyperlinks
Primitive understandings of Harold Innis’ concept of “monopolies of knowledge” regard it as information-hoarding. What I think of as “vulgar Innisianism” treats such hoarding as deliberately motivated, initiated and maintained by an elite who intentionally use the monopoly to preserve their elite status. Both these simplistic assumptions greatly under-estimate the importance and explanatory power of the concept.
In an article dedicated to showing how the new concept of “deep links” in mobile apps isn’t actually all that new, Scott Rosenberg includes a discussion of how the nature of hyperlinks on the Web has changed, and how the Google search engine is heavily implicated in that change. It illustrates ways in which monopolies of knowledge can be conceived that go beyond primitive and vulgar Innisianism.
As noted in the article, the original idea behind hyperlinks was to create hypertext. Hypertext wasn’t a technological form so much as it was a concept. The concept was text that wasn’t linear and determinate. A reader could shift to and from different texts, backtracking and diversifying the trajectory of their reading at will. The hyperlink, as originally implemented via the protocols developed by Tim Berners-Lee, offered a way of partially implementing this notion. The hyperlink could be, and in the early days of the web often was, a way of linking a word or phrase occurring in the middle of a text to another document, or even another section of another document. Rosenberg notes the experience that this created:
Here’s the hardest thing to remember about discovering links at the dawn of the Web: They were fun. As journalist Gary Wolf put it in the lead of a 1994 Wired piece that introduced the Web browser Mosaic to a wide readership: “Mosaic is not the most direct way to find online information. Nor is it the most powerful. It is merely the most pleasurable way… By following the links — click, and the linked document appears — you can travel through the online world along paths of whim and intuition.”
James W. Carey (Communication as Culture, pp. 148-9) provided a more complex idea of monopolies of knowledge that goes beyond mere information-hoarding. While acknowledging that one form of monopoly could refer to the hoarding of “factual information or data”, he claimed:
There is, however, a more stringent sense of the meaning of a monopoly of knowledge. When one speaks, let us say, of the monopoly of religious knowledge, of the institutional church, one is not referring to the control of particles of information. Instead, one is referring to control of the entire system of thought, or paradigm.
Monopolies of knowledge don’t just apply to information. If they can apply to control of entire systems of thought, they can refer to modes of knowledge that aren’t obviously informational. They can affect the answer to questions like “what is a hyperlink used for? And why?”, for instance. They can change the nature of know-how, and of interpretation of reality.
Rosenberg more or less argues that this is exactly what happened to the know-how associated with creating and understanding hyperlinks, courtesy of the Google search engine, and that in the process Google centralised their power. Initially, Rosenberg claims, the “power” of hyperlinks resided in their ability to “subvert hierarchy”. But the Google search engine operated on a different assumption about what hyperlinks were: it “showed us that links could be read as signals of authority and value”. It basically redefined the answer to the question of what a hyperlink is for, and why.
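To make that premise concrete: reading links as signals of authority means, roughly, that a page pointed to by many well-linked pages counts as more authoritative than one nobody links to. Below is a minimal, hypothetical sketch of that idea in Python, in the spirit of the PageRank family of algorithms; the toy link graph, the damping factor and the function name are my own illustrative assumptions, not a description of Google’s actual system.

# A toy illustration of treating hyperlinks as "votes" of authority.
# The graph, damping factor and iteration count are illustrative only.
def link_authority(graph, damping=0.85, iterations=50):
    """graph: dict mapping each page to the list of pages it links to."""
    pages = list(graph)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_score = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in graph.items():
            # A page with no outgoing links shares its score evenly with all pages.
            targets = outlinks or pages
            for target in targets:
                new_score[target] += damping * score[page] / len(targets)
        score = new_score
    return score

# Example: "c" is linked to by both "a" and "b", so it ends up with the highest score.
toy_web = {"a": ["c"], "b": ["a", "c"], "c": ["a"]}
print(link_authority(toy_web))

On a toy graph like this, the page that accumulates the most incoming “votes” from other well-regarded pages rises to the top, which is the sense in which a search engine of this kind reads links as endorsements rather than as invitations to wander.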
And it managed to propagate that redefined answer. It did so because thinking of hyperlinks in this way was just so useful. It turned searching the web from a hit-and-miss affair into one where you can almost always find the most relevant result for your query, often with only the vaguest idea about how to formulate your query string. But in doing so, it displaced the original conception of what a hyperlink could be:
Links suddenly weren’t so much fun any more. They stopped serving us as an alternative way of thinking about and creating informational relationships; they settled into a functional role. They became tools for navigating websites and pointers for sharing content on social networks. Finally, links became click-bait — transparent come-ons for traffic in an accelerating race to the bottom of our brainstems. We found ourselves arguing whether links help us see connections or just distract us or make us stupid.
Rosenberg touches on the vulgar version of Innisian monopolies when he points out that Google makes money because “they put a price tag” on links. But he offers a way of understanding a more expansive, and less vulgar, version of it, when he points out that Google, just by existing, has changed the default practice of hyperlinking and the default understanding of what hyperlinks are for, and that Google specifically did not intend to do this. They wanted to help people find information, not redefine what “hyperlinking” meant. Even so, that is what happened.
Like many of his core concepts, Innis never specifically defined what he meant by “monopolies of knowledge”. It falls to those who seek to build on his work not only to understand their potential applicability today, but also to work out how best to conceptualise them so that they can be applied. By counting interpretive, pragmatic and other forms of knowledge as knowledge that can be monopolised, and by avoiding the temptation to always ascribe either the creation of a monopoly or its maintenance to deliberate intentionality, I think this concept of Innis’ can be applied much more fruitfully to the contemporary communication environment.
An example to consider: Facebook’s Internet.org project gets criticised because of the financial incentive that lurks behind it. A more pressing critique, one that sees the project as potentially creating problems not intended by the project creator, would ask if, simply by existing, Internet.org might transform the understanding of what the Internet actually is. My initial answer is that, without necessarily meaning to, it drastically changes the nature of the Internet into one in which the Internet is, for all intents and purposes, Facebook.
The binge-watch: a viewing practice against the digital stereotype
With the launch of Netflix in Australia, a somewhat new form of involvement with media will likely become much more widespread: the binge-watch.
Binge-watching a TV series simply means watching every episode of a multi-episode TV series in one sitting. I suspect that it only became a common phenomenon over the last ten years or so, as the ability to acquire an entire season’s worth of a TV show became cheaper and easier, and as TV story-telling increasingly tended towards season-long arcs, so that viewers are better off watching all prior episodes of a show before they watch a new one.
Of interest to me is the way that binge-watching to some extent bucks the general trends associated with media use in the so-called “digital age”. True enough, as Manuel Castells points out in the new preface to the 2010 reprinting of his seminal “network society” trilogy, television programs themselves are increasingly watched on computers, or even mobile devices these days, and are increasingly watched at the time and place of a viewer’s choosing, not at a time pre-programmed by a TV station. Castells can reasonably say that “The Web has…transformed television” (p. xxvii) on this basis, on the consumer end mainly by making the act of watching a more individualistic, less communal experience. It’s notable here how such a transformation deviates from the claimed overall thrust of the “digital transformation”.
Ever since the 1990s, a strong, recurring argument made in favour of the Net over television, or “new media” over “old media”, was that the Net and associated new media were fundamentally, in their essence, interactive. This was not only a qualitative distinction but, according to its proponents, an empowering one. No longer would we have to sit by and passively absorb the content of mass media, as “Second-Wave” institutions and their ossified practices of standardisation gave way to the new and vibrant “Third-Wave” world of de-massification. With the new media regime, we would regain the control over the creation of our own culture that the massification of the media in the early twentieth century had stolen from us. And yet, here we are, where one of the newer modes of involving oneself with a cultural product is, at least during its consumption, a version of the same pre-Net media configuration that, if anything, intensifies many of the aspects the proponents of the new interactivity disliked: a message broadcast by a unitary producer to a receiver who can’t talk back, and who consumes in isolation from other consumers.
Granted, there’s plenty of person-to-person interactivity these days that goes on around TV viewing after its consumption, via online fan communities and the like, but the experience on which that person-to-person interactivity depends is still one which doesn’t fit the version of interactivity that, to this day, is still to some extent normatively promised by “new media” in the “digital age”. Binge-watching, in itself, is consuming, not “prosuming”, let alone “produsing”.
That said, the practice of binge-watching also violates a claim about the trajectory of media development that is critical of “new media” developments. Within the academic literature, there’s the occasional claim that digital technologies transform communication away from an emphasis on dialogue and meaning and towards an emphasis on mere connectedness. Vincent Miller frames this in terms of digital technologies becoming increasingly oriented towards “phatic communication”, or communication aimed at performatively indicating a social connection, rather than aimed at dialogue or exchange of ideas. He sees this orientation away from actual information exchange as including an orientation away from concern about the commodification of information by media platforms, and worries about the consequences of that.
Miller associates the trend towards an increasingly phatic orientation to communication with the trajectory of media forms towards transmitting ever-shorter amounts of text, from blogging to social networking through to micro-blogging. He sees this trend as a core part of “digital culture”, arising in part because the demands of “connected presence” make “the time-saving role of compressed phatic communication” much more important (p. 395). Yet the tendency of binge-watching isn’t the compression of time spent watching TV. Binge-watching is a significant increase in the time watching, as compared to earlier TV use. In fact, it’s not all that uncommon to hear people speak of 12-hour marathons of watching a television show. Sure, they’re presumably taking meal and toilet breaks in there somewhere, but still…
It may seem like a simple point, but it can be very easy to get caught up in the rhetoric of the “digital age”, much of which implicitly or explicitly claims a clear, single trajectory to the development of digital media which uniformly applies to all media types and media experiences. As the phenomenon of binge-watching shows, this isn’t true. Consumption of television may indeed have transformed in the wake of the Web, but that transformation seems to me to be more of an intensification of the original television experience rather than, say, the shift towards choose-your-own-adventure style interactivity in TV story-telling that some of the more pious zealots of the New Media Revolution(tm) were imagining in the early 1990s. And that isn’t necessarily a bad thing.