The poststructuralist impulse, insofar as it had any unity, was, as I understand it, all about denying the possibility of coherence and stable meaning in a text. Contrary to a modernist impulse to believe that a text “speaks for itself”, poststructuralists in general, and Derrida especially, denied that any text could have a stable meaning. Every word, every phrase, every sentence: they can only be understood in their relation to other words, phrases, and sentences, whose meaning only makes sense in relation to others, and so on and so forth, in a never-ending chain of difference or deferral (or différance). I wonder if this critique has largely ended the traditional humanities’ interest in training students in the interpretation of text.
I also wonder if this leads to the problem that most people these days can’t interpret texts for shit.
They just assume that a text “speaks for itself” even as they apply their own interpretations in the attempt to understand it. The unexamined operations of interpretation are confused with the supposed plain meaning of the text.
This is the problematic misunderstanding of interpretation underpinning the very idea that the most reliable and accurate source of information is a document dump. Julian Assange pioneered this belief with what he termed “scientific journalism”:
Assange told me, “I want to set up a new standard: ‘scientific journalism.’ If you publish a paper on DNA, you are required, by all the good biological journals, to submit the data that has informed your research—the idea being that people will replicate it, check it, verify it. So this is something that needs to be done for journalism as well. There is an immediate power imbalance, in that readers are unable to verify what they are being told, and that leads to abuse.” Because Assange publishes his source material, he believes that WikiLeaks is free to offer its analysis, no matter how speculative.
Assange here conflates the “data” of scientific publication with “source material”. They are not the same thing. “Data” is what emerges from the source material of scientific research once very strict rules of interpretation have been accepted and adhered to. The strictest rules of interpretation are those governing mathematical language: “1 + 1 = 3” is always going to be interpreted as “wrong”, and rejected by anyone with a modicum of knowledge about arithmetic. The rules of interpretation for plain language are far more flexible, far more ambiguous, and it is often far, far harder to determine which particular set of rules should apply to a particular piece of language. The upshot is that providing source material is no guarantee of protection against misinformation. All it takes is the introduction of a particular interpretation (a “spin” or “framing” of the source documents) in order to misinform. And this is exactly what WikiLeaks does.
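The gap between strict and loose interpretation rules can be sketched in code. Under arithmetic’s fixed conventions a claim is determinately true or false for every reader; an ordinary sentence admits multiple readings that the text alone cannot decide between. A minimal sketch (the example sentence and its readings are my own illustrations, not from any document dump):

```python
# Under arithmetic's fixed interpretation rules, truth is determinate:
# every reader (and every machine) evaluates the claim the same way.
claim_is_true = (1 + 1 == 3)
print(claim_is_true)  # False

# An ordinary-language sentence has no such fixed rule. The same words
# admit multiple readings, and nothing in the text selects one:
sentence = "I saw her duck"
readings = [
    "I watched the duck that belongs to her",
    "I watched her lower her head quickly",
]
# Only context -- largely absent from a document dump -- can decide.
print(len(readings))  # 2
```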
Sociologist Zeynep Tufekci has described the process:
WikiLeaks seems to have a playbook for its disinformation campaigns. The first step is to dump many documents at once — rather than allowing journalists to scrutinize them and absorb their significance before publication. The second step is to sensationalize the material with misleading news releases and tweets. The third step is to sit back and watch as the news media unwittingly promotes the WikiLeaks agenda under the auspices of independent reporting.
So the solution is to just read the documents themselves, ignoring whatever “speculative” analysis WikiLeaks adds? I don’t think so, no. See, the poststructuralists had a point when they said that there is no underpinning essence of meaning in a text. Texts flat out do not speak for themselves: interpretation is necessary, and interpretation is itself an unstable process with no essential key. Their restriction of analysis to text alone led to a dead end, though, from which I think the humanities are still struggling to recover. One attempt at recovery draws a little from pragmatism. In this approach, texts in themselves provide no inherent meaning, but texts, when related to empirical activity and human action, can be made to convey stable meaning. Mathematical language is perhaps the best example of this success, but it is a poor model for everyday language, and everyday language should not be modelled on mathematical or other scientific language, as Assange wrongly wants to do.
Here are a few of the interpretive issues that prevent document dumps from being a reliable source of information:
- Intent is really hard to infer. Mathematical and scientific language gets around this by purging intent from permitted language altogether. This is not going to work in ordinary language, where the intent of the writer is often the main thing the reader is interested in. But without clear linguistic markers indicating intent, intent is flat out unknowable. And speaking of “linguistic markers”…
- Context is everything. This is true even for mathematical and scientific language. True, “1” and “10” can stand on their own pretty well (though if you thought the second example was “ten”, it’s actually in binary notation and represents “two”, HA!), but any entry into empirical science immediately raises the question “1 of what? 10 of what?” The International System of Units is a global agreement by which the relationship between mathematical measures and empirical reality is universally standardised (even while the US insists on using the old imperial measurements for too many things). In everyday language, the relationship between concepts and reality is highly, highly variable. The philosopher Wittgenstein, after his own early attempt to standardise that relationship, appears to have concluded that the very nature of ordinary language makes such universal standardisation impossible. Quoting from the linked Stanford Philosophy page, Wittgenstein claimed “the meaning of a word is its use in the language”. Unless a word’s meaning is made consistent (and consequently restricted in that consistent usage to a specialised context, such as scientific language), its use, and consequently its meaning, will vary. The specific meaning, therefore, depends on the specific context. How to determine the context? Well…
- For document dumps, most if not all of the context is missing. If there is an email in a document dump discussing something, what non-email discussions occurred prior to and subsequent to the composition and sending of that email? Every conversation, every interaction, potentially relates to every other. Scientific language, again, tries to standardise this, through the use of referencing. Everyday language does not, and should not. What’s the point of in-jokes, which depend on secret knowledge of a shared prior context, if the context is required to be explicitly explained each and every time? An upshot of this missing context is that entirely innocent conversations can look decidedly odd to outsiders. There is an entire conspiracy theory, known as “Pizzagate”, which started from confusion about why John Podesta, a high-level member of the US Democratic Party, and his friends talked a lot about pizza in emails. Somehow people got the idea that “cheese pizza” was code for “child pornography”, and suddenly a whole host of other innocent connections became interpreted as nefarious.
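The context-dependence of “1” and “10” above can be made concrete: the very same string yields different numbers depending on which interpretive convention (here, the numeric base) the reader brings to it. A minimal Python sketch:

```python
token = "10"

# The same marks on the page, read under three different conventions:
as_decimal = int(token, 10)   # base ten: "ten"
as_binary = int(token, 2)     # base two: "two"
as_hex = int(token, 16)       # base sixteen: "sixteen"

print(as_decimal, as_binary, as_hex)  # 10 2 16
```

Nothing in the string itself picks a base; the convention is supplied by the reader. That is precisely the role context plays for ordinary language, and precisely what a bare document dump strips away.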
Zeynep Tufekci has also previously made the argument that releasing private information, without regard for its relevance to the public interest, actually constitutes a form of censorship:
Once, censorship worked by blocking crucial pieces of information. In this era of information overload, censorship works by drowning us in too much undifferentiated information, crippling our ability to focus.
I think Tufekci’s argument here is still too tied to a model of document dumps as raw information provision. The tendency in our “informational” era is to view more information as a good thing, despite the occasional complaints about “information overload”. Provision of documents (“information”), though, is not the same thing as provision of meaningful information. Document dumps presume that meaning exists in the documents themselves. As the poststructuralists rightly recognised, it does not. But the ability to impute meaning to a document constitutes a source of great power in the Information Age. An entity that can convince others of the context and intent of particular documents, confident that those who view the documents will be literally unable to determine context and intent from the documents themselves, is well placed to spread disinformation (or rather, “disinterpretations”) quite effectively.