Reading with Suspicion

all science would be superfluous if the outward appearance and the essence of things directly coincided.

Marx, Capital, Volume 3.

Marx’s remark about the role and function of science - to get below appearances - seems strange in a world defined by positivism. The idea that the role of science is to unmask or demystify is at odds with the idea of science as the registering of data, the finding of patterns, and the drawing of conclusions. It makes more sense when we consider that the German word for the humanities - Geisteswissenschaften or “spirit sciences” - draws no distinction between a science of interpretation (the humanities) and the social or natural sciences. Indeed, Wilhelm Dilthey, one of the main exponents of hermeneutics - the discipline of interpretation - was concerned with finding a common base methodology for both the natural and human sciences.

In Paul Ricoeur’s book on hermeneutics, Freud and Philosophy: An Essay on Interpretation, he proposes two different kinds of interpretive practice. Because hermeneutics developed out of Biblical criticism, one form was concerned with “the recollection of meaning”, i.e. finding out what a text meant when its language and cultural references had long passed into history. Ricoeur identified a second form of hermeneutics, which he considered an “exercise in suspicion”, interpretation as “reduction of the illusions and lies of consciousness”. This second form was centred on a “school of suspicion” - Marx, Nietzsche, and Freud - each of whom exemplified a particular form of this hermeneutics of suspicion.

If we go back to the intention they had in common, we find in it the decision to look upon the whole of consciousness primarily as “false” consciousness.

Ricoeur, Freud and Philosophy.

Ricoeur argues that Marx, Nietzsche, and Freud can be considered inheritors of Cartesian skepticism. But where Descartes and his successors know “that things are doubtful, that they are not such as they appear… [they do] not doubt that consciousness is such as it appears to itself; in consciousness, meaning and consciousness of meaning coincide”. (In the language of contemporary philosophy, our consciousness of the contents of our own minds is incorrigible.) In Marx, Nietzsche, and Freud, however, consciousness itself has become doubtful. Marx demystified the ways material practices unconsciously determine consciousness (in the form of ideology); Nietzsche doubted the Greek inheritance of Reason; and Freud undermined the transparency and immediacy of the individual psyche.

Generally speaking, anyone who has been influenced by these three or their successors develops a habit of reading with suspicion, even - or perhaps especially - when reading Marx, Nietzsche, or Freud. There are exceptions, of course - vulgar Marxists tend not to read Marx or Lenin with suspicion, and Nietzschean edgelords tend not to read Nietzsche with suspicion. Perhaps the most valuable thing one can do as a devotee of either Marx or Nietzsche is to learn to read oneself with suspicion. At any rate, the habit of reading with suspicion tends to draw criticisms of cynicism, relativism, and immorality, because it goes against the dominant liberal grain of reading with trust.

Liberalism’s need to erase or repress all questions of power excludes power from any reading. Any text must be taken at face value until proven otherwise (and proof never comes). We need to trust… schools, the government, the church, Big Brother, Artificial Intelligence, whatever is on offer. To read with suspicion is at best ungenerous and at worst an attack on liberal principles, tolerance, and social peace.

In the recent debates around the “qualitative leap” made by GPT-3 (now 4) and ChatGPT, the defenders of AI as a Great Leap Forward think it is churlish to read AI with suspicion. For example, Scott Aaronson, who works for OpenAI developing the “theoretical foundations of AI safety”, put it this way:

I’m asked to fear an alien who’s far smarter than I am, solely because it’s alien and because it’s so smart … even if it hasn’t yet lifted a finger against me or anyone else. I’m asked to play the bully this time, to knock the AI’s books to the ground, maybe even unplug it using the physical muscles that I have and it lacks, lest the AI plot against me and my friends using its admittedly superior intellect.

Leaving aside for a moment that someone working in AI thinks the expressions “far smarter” and “superior intellect” are anything but incoherent, Aaronson’s view is that because an alien superintelligence “hasn’t lifted a finger” against us, we should trust it. Now, the problem with the liberal vision of trust, the reason why Marx, Nietzsche, and Freud can be dismissed as members of a “school of suspicion”, is that liberalism can only see things in terms of a trust/attack binary. Note how Aaronson contrasts “not trusting” a superintelligence immediately with playing the bully. A Marxist interpretation would suggest that this is due to liberalism’s role as the ideology of capitalism, determined by the binary nature of advanced capitalist class society. But why should the opposite of trust be attack?

Another interpretation would suggest that the emphasis here on “trust because it hasn’t done anything yet” is due to the positivism of natural science. An observation either occurred or did not occur. If an observation did not occur, then a scientist is not justified in doing anything. And most importantly, every observation occurs in isolation. To have a theory of how technology works under capitalism is to prejudge and bias our observations (this of course corresponds with the popularity of Bayesianism in contemporary culture). We - the suspicious - are unjustified in saying: we know from past experience/history/study how technology operates in capitalist society, a society riven by class/race/gender/sexuality/disability oppression; therefore, even if we don’t already know what AI and machine learning will do (exacerbate social oppression in the hands of the capitalist ruling class), we are justified in treating it with suspicion. And suspicion does not mean immediately “becoming the bully”. It means not falling for the hype, hype we are also supposed to just trust.

But how far can we trust an AI expert who writes the following paragraph?

OK, but what’s the goal of ChatGPT? Depending on your level of description, you could say it’s “to be friendly, helpful, and inoffensive,” or “to minimize loss in predicting the next token,” or both, or neither. I think we should consider the possibility that powerful AIs will not be best understood in terms of the monomaniacal pursuit of a single goal—as most of us aren’t, and as GPT isn’t either. Future AIs could have partial goals, malleable goals, or differing goals depending on how you look at them. And if “the pursuit and application of wisdom” is one of the goals, then I’m just enough of a moral realist to think that that would preclude the superintelligence that harvests the iron from our blood to make more paperclips.

When we read a statement that a machine can have a goal, we should read it with distrust. Machines don’t have goals, and pretending that they do hides (mystifies, obscures) those who do have goals: capital, white-supremacists, the police. Somehow a self-proclaimed “moral realist” adopts the weirdly unrealist position that machines can and do have goals.

The last statement in that paragraph brings in Aaronson’s own distrust (!) of the orthogonality thesis. In basic terms, the orthogonality thesis states that there is no necessary relationship between intelligence and morality: a “superintelligent” being could be amoral or immoral (or evil). Aaronson disputes the orthogonality thesis, which leads him to argue that a “smarter” AI must necessarily be “more moral”. But Aaronson uses a strange illustration to support his rejection of orthogonality. He argues that all the “dumb people” were on the side of the Nazis in World War II (“when you look into it [the Nazis with PhDs] were mostly mediocrities, second-raters full of resentment for their first-rate colleagues”) while all the Big Brains were on the side of the Allies (“they had, if I’m not mistaken, all the philosophers who wrote clearly and made sense”).

WWII was (among other things) a gargantuan, civilization-scale test of the Orthogonality Thesis. And the result was that the more moral side ultimately prevailed, seemingly not completely at random but in part because, by being more moral, it was able to attract the smarter and more thoughtful people. There are many reasons for pessimism in today’s world; that observation about WWII is perhaps my best reason for optimism.

If we interpret this argument from a position of trust, it might seem plausible, leaving aside the weird American obsession with rank and the quantification of intelligence. But once we engage in a hermeneutics of suspicion, one glaring omission sticks out. The side that won the war “by being more moral”, the side that “was able to attract the smarter and more thoughtful people”, dropped two nuclear bombs on civilian populations. Let that sink in for a moment.

Imagine leaving Hiroshima and Nagasaki out of your analysis of the moral order implied by Allied victory. Imagine suggesting that morality won out because the Big Brains defeated the Nazis while not acknowledging the hundreds of thousands of people horrifically murdered by American atomic bombs. But we aren’t supposed to think about Fat Man and Little Boy because they completely destroy Aaronson’s refutation of the orthogonality thesis. If, as he says, all the Big Brains were on the side of the Allies, and the Allies dropped nuclear bombs on civilians, then that suggests the orthogonality thesis is correct. If the Allies were “more moral” and they still committed such a heinous act, then the orthogonality thesis stands.

(To be clear, I don’t hold to the orthogonality thesis, because I don’t think expressions like “more moral” or “more intelligent” are meaningful. But this does demonstrate the importance of reading with suspicion.)

I think one of the main objections to a hermeneutics of suspicion is not only that it goes against a positivistic, hard-science approach to human phenomena, but that it requires us to understand the role of power at play in all such phenomena. Defenders of AI don’t want us to think about the way technology serves power under capitalism. They want us to think that machines have goals of their own, rather than the standard capitalist goals of helping capitalists devalue and replace labour, exploit and oppress the periphery, and murder millions of innocent civilians who, simply by existing, challenge the world hegemon.

A hermeneutics of suspicion is no guarantee of anything. But it does help to unmask and demystify the things people say and do. Because people say and do things with an agenda - there’s no such thing as a neutral position or a neutral act - and “suspicion” is merely the term used for trying to figure out what that agenda is. Suspicion need not be a pejorative term, but it will always be a challenge to the dominant order of things.
