Tag Archives: Google
I don’t know if it’s age or just studying in the humanities, but I seem to be getting steadily more critical of claims about the inherently emancipating power of the Internet and of new media generally. That said, I’m far from being a cyber-pessimist. But I’m still trying to work out the best way forward for how to examine issues concerning the Internet, especially in the light of how much simplistically determinist rhetoric (both utopian and dystopian) still exists around it even today.
Take one of the classic works (classic by the standards of Internet time, at least) on what the Internet supposedly meant for the political economy of intellectual production. Yochai Benkler’s 2006 book “The Wealth of Networks” was an analysis of what he called the new “networked information economy”. The key distinction between this “networked information economy” and the earlier “industrial information economy” was that, in the networked information economy, non-monetary motivations for creating and circulating information could compete on an even footing with financially motivated creation and circulation. Amateur video makers could and would try to create videos with the expectation that those videos could potentially reach the same level of circulation as Hollywood blockbusters.
A good example of both the promise Benkler saw in the Internet and of the failure of that promise to come to fruition is the example Benkler gave of searching for “Barbie” on Google. When Benkler first wrote his book, the top 10 “Barbie” search results on Google were a mix of sites both positive about and critical of the Mattel doll. Benkler took this to be indicative of the way the networked information environment’s “radically decentralised production modes provide greater freedom to participate effectively in defining the cultural symbols of our day” (p. 277). No longer would the mainstream cultural meaning of Barbie be solely determined in the public sphere by the money Mattel expended on defining it through ads, PR campaigns and the like; critical voices with their own opinions about Barbie, like the site Adios, Barbie, could be found and heard just as easily.
Yet a simple search on Google today for the term “Barbie” yields a very different set of 10 search results from the ones Benkler found: as of November 2013, they are almost all positive. As the blog “Rough Type”, which examined the search results in March 2013, put it: “If Mattel could simply purchase the first page of Google’s search results for Barbie, the page would look pretty much the same, right down to the photo of the doll’s famously ample chest”. What’s more, Rough Type claims that this bias towards official and positively inclined top 10 results has been observed in Google searches since 2008.
The Rough Type blog sees this Mattel-isation of Barbie search results as an indication that the “‘democratized’ information economy” of the Internet, once it expanded its purview beyond a handful of pioneers like Benkler, turned out to be not particularly revolutionary or freedom-enhancing, but “bland, homogenized, and infused with a consumerist ethic”. But I think that underestimates the complexity of the relationship between what people do on the web and the way Google search works.
Of course, “what people do on the Web” and “the way Google’s search algorithm works” are not distinct from each other. It’s fairly obvious that, since Google’s search engine ranks webpages partly by the number of links they receive, webpage maintainers’ decisions about which webpages to link to affect those linked webpages’ position in Google’s search results. But the dominance of Google as a means of finding things on the web also means that people will make decisions about webpage design based on their desire to be found. An entire industry offering search engine optimisation (SEO) for website maintainers has sprung up.
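The link-based ranking idea can be sketched with a toy version of PageRank. This is a simplified illustration only, not Google’s actual algorithm (which combines hundreds of signals); the link graph, damping factor and iteration count below are invented for the example:

```python
# Toy PageRank: a page's rank depends on the ranks of the pages that
# link to it, so many links (especially from well-linked pages) mean
# a higher ranking. Simplified illustration, not Google's real system.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical link graph: two fan sites and one critical site all
# link to the official site, while the critical site gets no links.
graph = {
    "official": ["fan1"],
    "fan1": ["official"],
    "fan2": ["official"],
    "critic": ["official"],
}
ranks = pagerank(graph)
# The heavily linked "official" page ends up with the highest rank.
```

The point of the sketch is the feedback loop described above: because rank flows along links, the linking decisions of many independent site maintainers collectively determine who appears on the first page of results.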
But where things get especially interesting for me is in the relatively recent (2011) change to the Google search algorithm that led it to prioritise “newness” in search results. This is a fascinating indication both of what kind of information is most valued in our culture (recent, not old) and of the way that the dominant means of finding information in our culture can both reflect and promote that cultural value. High rankings in search results will come not just to highly linked sites but to sites whose maintainers have the time and other resources needed to constantly update them with “new information” – even if that “new information” is nothing more than a blurb about the latest and greatest Barbie doll now on offer.
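How a freshness signal could let a constantly updated site outrank a better-linked but stale one can be illustrated with a toy scoring function. The weighting scheme and the exponential half-life here are entirely invented for illustration; the details of Google’s actual freshness signal are not public:

```python
import math

def score(link_authority, days_since_update, half_life_days=30.0):
    """Toy ranking score: link-based authority discounted by content age.

    freshness decays exponentially, halving every half_life_days, and
    the final score blends authority with freshness 50/50. All of these
    numbers are invented for illustration.
    """
    freshness = math.exp(-math.log(2) * days_since_update / half_life_days)
    return link_authority * (0.5 + 0.5 * freshness)

# A frequently updated official site vs. a well-linked but stale critique:
official = score(link_authority=0.8, days_since_update=1)
critique = score(link_authority=1.0, days_since_update=365)
# Under this toy model the fresher site wins despite weaker links.
```

Under any scheme of this general shape, whoever has the resources to publish something “new” every few days gets a standing advantage, which is exactly the dynamic described above.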
Would site maintainers still try to have their websites contain “new information” even if they wouldn’t get a higher Google ranking by doing so? Probably. The ad-supported revenue model that sustains most websites today requires that people visit sites (and get exposed to ads) as frequently as possible, which means supplying visitors with a new reason to visit the same site as much as possible. Somehow – I really don’t know how – this relates to the pursuit of novelty in all things implied in a “consumerist ethic”. Hm.
Anyway, I don’t think the makeup of Google’s search results merely reflects a pre-existing and incontestable banality of the population: as sociologists of science and technology have been saying for decades now, technology is not a neutral force. Decisions about how to rank search results affect the way websites are created and maintained; and the different resources available to different site maintainers – even if that resource is merely free time – can be, and are, deliberately channeled into efforts to improve search rankings, with results obviously varying according to the level of resources available. Mattel and its corporate partners can afford to make a huge effort at improving Barbie’s Google presence; Barbie’s critics, not so much.
To this extent, the search results returned by Google seem to me to be just the start of understanding how cultural values are created and contested both in the operation of Google search itself as well as in the ways differently situated actors might be able to react to Google’s current position as, more or less, the information retrieval service of the contemporary moment.
Addendum: when I ran the “Barbie” Google search in November 2013, the first page of results was, for the most part, made up either of official Barbie-related webpages, like the official Barbie Facebook page, or of sites offering official Barbie-related paraphernalia (usually the doll, though the 10th result was an offer for a “Barbie Dream Cruise” on a cruise ship). I also received some image search results for Barbie which looked to be officially supplied by the corporation, and some contemporary news results about the latest Barbie doll to be released, namely the “Jennifer Lopez Barbie”. Some news results merely reported the new doll’s existence, but others were (admittedly quite mildly) critical. For example, this was a news result from the first page of the “Barbie” Google search: “Are the new Jennifer Lopez Barbie dolls curvy enough?”. The distinction between what Mattel might want as front-page Google results and what actually appears there still does exist in some small form, apparently.
The word “affordance” is frequently used in writings about new technology. For example, danah boyd has stated that the “affordances” of “networked publics” include things like the persistence of information within them, and the ease by which information can be reproduced and transmitted far beyond its initial context of production.
The simple (but not quite accurate) understanding of the term is that it describes the possible uses that humans can make of specific technologies. As such, it can sometimes be construed as an example of that great bugbear of the social studies of technology: technological determinism. Technological determinism is the naive belief that technology is some sort of abstract force which influences society to some degree, but is not itself the product of social forces of any form. Technological determinism comes in several forms – the most common distinction is between “hard” and “soft” determinism, with “softer” versions viewing technology as just one factor shaping society rather than the dominant or only one – but the idea common to all is that technology itself need not be explained as, say, the outcome of competing social pressures, or as influenced by different ideas about what kinds of technologies ought to be developed in what kind of way.
The history of the social study of technology since the late 1970s has been one of uncovering the social forces at work in the creation and development of various technological artefacts and systems. The different intentions of different creators and users are considered as much a part of the story of technological creation and development as is the solving of specific problems encountered along the way. The dominance of the private, petroleum-powered automobile as a system of transport depends as much on consumer decisions about what kind of transport is preferable, and why, along with legislative backing for construction of the necessary transport infrastructure (roads instead of rails), as it does on any alleged “technological superiority” of the petrol-powered car over the alternatives that were being developed at the same time.
At its most radical, this critique of technological determinism goes to the opposite extreme. A school of thought known as “social constructivism” posits that we should ignore any claims that a technological artefact has any “inherent” properties, and that all such properties are actually a matter of contingent interpretation. Different groups can have different interpretations about what a technology actually “is”, and it’s the task of the sociologist of technology to point out where certain groups’ interpretations have been shut down by the claim that a property of technology is “inherent”: it’s actually just the dominant interpretation imposed by the dominant group.
The notion of a “technological affordance” was introduced by a man named Ian Hutchby as a reaction against social constructivism. Sure, he said, there’s some interpretive flexibility about how technologies are developed and get used, but come on: can you really “interpret” a soft drink vending machine as a spaceship and then fly to Mars with it? The materiality of technology matters, and it is this materiality that not only constrains what it is possible to do with technologies so that you can’t fly to Mars in a vending machine, but also actually provides the “stuff” by which technologies can be put to use in the first place.
Hutchby chose the term “affordance” to describe the materially based constraints on what could be done with specific technological artefacts because it had previously been used in perception studies with a specific meaning. In perception studies, the “affordance” of an object was the use the object made available to a perceiving entity. It was more appropriate to speak about an object’s affordances than, say, its properties, because “affordances” were relational: they only manifested and became notable when an entity actually tried to make use of them. Therefore, for Hutchby, the “affordances” of a technological artefact were those properties of it that could be used by individuals for their specific purposes: purposes defined by the user of the technology rather than the technology itself. This was Hutchby’s attempt to posit technological artefacts as having effects, but without claiming that those effects exist independently of human social action.
Hutchby quite deliberately dodged the question of how technological affordances came to exist in any specific technological artefact. His intention was to shift the focus of technology studies to an area that he believed had been seriously neglected amid all the attention devoted in social studies of technology to the creation and development of technologies. After technologies have been developed and deployed, what then? How do technological artefacts and systems relate to society once they are no longer being developed? When they have become mundane and “boring”, in other words?
In an era when the Internet has gone mainstream, when certain forms of communication media such as e-mail appear to have largely stabilised in technological form, aren’t we losing something by continuing to study technology in terms of creation and development rather than in terms of the use to which stable technological forms are actually put?
Maybe. But I also think that there’s an assumption here that’s unwarranted. An analysis in terms of the affordances provided by technologies can work if the form of that technology remains stable over a long period of time. But at the time I write this, I can sum up precisely why this assumption is not only wrong, but dangerous, in two words: Google Reader.
The shock announcement that Google Reader would be shut down provides a very clear lesson about the dangers of assuming the stability and durability of technological artefacts, not just in technology studies but in everyday life. The “affordances” of Google Reader will cease to exist this coming July. Analysing the recent history of Google Reader also gives a very clear indication of how the affordances of this “stable” technology radically changed over the past two years. These changes can’t readily be explained in terms of materiality; they are more appropriately described in terms of corporate politics, market competition and dominant intellectual paradigms. Quite simply, Google Reader lost out to Google Inc’s decision to compete with Facebook. In the process, Google eliminated many of the “social” features that had been built up in Google Reader and replaced them with its clone of Facebook’s “like” button: the “+1” button tied to the new Google Plus networking site. This destroyed the communities that Google Reader had created.

The announcement to retire Google Reader amounts, in my mind, to the completion of Google Inc’s shift away from a paradigm of technological development motivated by the idea that sharing information constitutes communities, to one motivated by the idea that “networked communities” exist – both online and offline – in which information is incidentally shared. For Google’s future technological intentions, the “political public” model of community has given way to the “social networking” model: community as founded on communication has become community in which the communication within it is a monetisable commodity.
Google didn’t have to do this. When the announcement was made, people seemed to be in shock that it did. The “durability” of many technological artefacts – durability that must be assumed in an analysis focusing on “technological affordances” – was exposed as dubious and dependent on continuing corporate patronage. The notion of “technological affordances” could be quite a troublesome concept if it inadvertently obscures the importance of that corporate patronage, and the motivations behind it, for the ongoing existence of technological artefacts in their current form.