ChatGPT, Commodities, and Tools

Among the many fears unleashed by the advent of ChatGPT at the end of November 2022 are fears of plagiarism and "not doing your own work"[1] in academic writing (which slots it neatly into the problematic terrain of academic misconduct), along with legitimate concerns about how we tell "true" GPT-produced writing from "false" (a GPT text will often invent non-existent sources, produce blatant falsehoods, and "misinterpret" texts from its corpus). But the most widespread fears around ChatGPT, and around AI more generally, have to do with the precarity of labour and the tendency, exposed originally by Marx and followed through in detail by the autonomist (particularly feminist) Marxists, for capital always to seek to automate away human labour: the most costly, risky, and unreliable element in the labour process, but also the only one that can produce new value. So the fears around ChatGPT with respect to labour are not new; they are a 2023 example of capitalist automation and of what the autonomists call the real subsumption of labour.

Real subsumption tends to involve the expansion of capitalist production, including automation to first cheapen and then discard human workers, into previously protected areas: areas which have remained much more labour-intensive than the general level of productivity in society. It is no accident that AI has "come for" artists and academics, since both fields had remained more or less immune to proletarianization. No longer. The labour effects of AI are meant to destroy artists and academics as a protected class of worker. The only way to solve that problem is for artists and academics to recognize first that they are workers, and second that resistance and overcoming lie not in maintaining the old protections and privileges, but in solidarity with other workers and the overthrow of capitalism itself.

But I want to talk about two other ChatGPT-related topics. One is connected with academia, but not directly on the labour side; the other involves our conception of tools. I will start with the question of tools.

When we look through a telescope, we might explain what we see in a number of ways:

  • Someone has placed a photograph of a distant object over the end of the telescope, showing us what it would look like if we were nearer to it.

  • An imp or spirit has magically transported us to the distant object we have pointed the telescope at.

  • A series of mirrors and lenses is magnifying the beams of light reflected off the distant objects and thereby making them larger on our retinas.

Understanding which of these things is taking place isn’t vital for us to see what we see through the telescope, but it is vital for us to understand what we are seeing. An enormous amount of scientific theory - knowledge - is embodied in a telescope and only by sharing that scientific theory - even if only in a partial or rudimentary way - can we use the telescope as a tool.

The philosopher Elizabeth Anscombe related a story about Wittgenstein:

He once greeted me with the question: "Why do people say that it was natural to think that the sun went round the earth rather than that the earth turned on its axis?" I replied: "I suppose, because it looked as if the sun went round the earth." "Well," he asked, "what would it have looked like if it had looked as if the earth turned on its axis?"

To my mind, the lesson here is that in order to see what we see, we have to already know what we are seeing. There is no such thing as perception unmediated by knowledge, and knowledge is always social.

Even a hammer requires social and theoretical knowledge. Inductively, through seeing other people use hammers and through employing hammers designed for human hands ourselves, we gain knowledge of where it is most useful to hold one and which part of it is best for striking a nail. To say that a hammer conforms to Newton's law that "every action has an equal and opposite reaction" is just to formalize one part of the theoretical knowledge which we have of how a hammer works.

The difference between a hammer and a telescope has to do with the direct experience of cause and effect. The more the causal mechanisms are inside the black box of the tool, the more theoretical knowledge the user must have to employ it correctly. It doesn’t take long to figure out how a hammer works best, but without a certain level of theoretical knowledge, a telescope remains a fearful object of mystery.

So it is with ChatGPT and all the other "AI" technologies. We are in the position of the Catholic Church when faced with the heretical workings of Galileo's telescope. Just as the telescope had to be refined to remove error, for the hardware to conform sufficiently with the theory to be reliable and functional, so AI technologies still need to be refined sufficiently that they don't spew out the worst evils of the corpuses on which they have been trained and don't commit the simple errors that they currently do. The AI chatbots have, I think, been prematurely released, but the only real problem is that, as with the invention of the printing press, we fear the unforeseen consequences of a new technology. Sooner or later, they will become reliable enough that, like the press, the telescope, the calculator, and the computer itself, they will become just another tool, embodying theory and needing some theory to use them properly, and we will drop the hype-label of AI.

But leaving aside the fears around ChatGPT, just as with Galileo’s telescope, the new technology is exposing real problems (of power, of economics, of politics) within our society. The corporatization of the universities that began in earnest in the 1990s has led inexorably to a commodity-model of education. Everything is a transaction: assignments are exchanged for grades, a portfolio of assignments is exchanged for a course mark, a portfolio of marks for a GPA, tuition dollars for a degree. Finally, a degree is exchanged for a job and a class position. The commodification of education in this way ties in all of Marx’s thinking on the nature and function of commodities.

Perhaps the most insidious of these is the logic of outcomes. Academic production conforms to Marx’s description of the “hidden abode” of the capitalist factory, where a sign on the door reads “no admittance except on business”. Except that with knowledge work the academic - including the student - becomes the factory: all we can measure are inputs and outputs. Publish or perish is a result of this logic of outcomes, as are prestige journals and impact factors and all the other quantitative measures of scholarly value. By the same token, all we “measure” with students are their outputs: their essays, assignments, and exams. The tools they use to achieve these ends are, for the most part, immaterial.

(I need to be specific here. By commodifying academic work, making it for exchange rather than for use, the exchange value of essays, etc., comes to predominate over their use value. The logic of commodities tells us that in order to increase exchange value, we need to lower the cost of producing the commodities. Students who use ChatGPT to produce academic work are only following the capitalist best practice of automating their own labour. It is hard cheese on students to expect them to see why a golden rule/best practice/law of nature with regard to all other commodities is a cardinal sin when applied to academic commodities.)

ChatGPT, however, seems to be a different beast: it threatens a major assumption of capitalist education, which is that there is a direct relationship between a student's output and their learning, that an output (essay, exam) directly reflects their learning, as we used to think words directly reflected empirical reality. ChatGPT may have the effect of decoupling that particular signifier/signified pair. The idea that we can tell what a student has learned, what internal academic production they have achieved, solely through the measurement of quantified outputs is not a natural or objectively "true" conception of education. It has been forced on students and faculty alike by hegemonic logic and power structures (larger and more numerous classes, less time, fewer job protections, more precarity, etc.), and by a level of exchange higher than tuition-for-degree: degree-for-job. The commodification of education goes hand in hand with the idea of education both as job training and as a class marker opening up particular kinds of jobs. ChatGPT is this logic taken to extreme heights; it is the commodity chickens coming home to roost.

The fear of automating away what is human, pleasurable, and valuable in experience and practice is a real one. But it is not the result of the tool. Ascribing the death of human experience to ChatGPT, or to AI more broadly, is to fall for the hype of AI, the idea that AI has some kind of human-like agency. It obscures the reality: that the death of the distinctly human is a capitalist end, an end to which capital puts all its tools, including hammers, telescopes, and AI technologies. To ascribe agency to tools is to fall prey to what Marx called "commodity fetishism": mistakenly ascribing social relations (i.e. labour-capital relations of power, inequality, and dominance) to things rather than to people and the power relations between them.

We can’t tweak capitalist education to somehow contain the threat of ChatGPT. Plagiarism and academic misconduct are the logical results of commodified education in which only outputs matter. Universities create the conditions for academic misconduct through their insistence on a logic of outputs, and the only way to change that, to reinstate an education process immune from misconduct, is to strip education of its material, extrinsic benefits (jobs, social class, privilege) and to restore an intrinsic, inherent value to education that has nothing to do with outputs. Only once we have done that can ChatGPT become just a tool, like a telescope, rather than an existential threat to what we currently think of as learning.

And we can only do that by taking education back from capital, which - since academic workers are workers and universities are factories - means taking society as a whole back from capital. Only a transformation in our social relations as a whole can make our tools more than the dehumanizing weapons of capital; and only with such a thorough transformation of our social relations can education be freed from its bondage to jobs, careers, and other economic privileges. To deal with the “threat” of ChatGPT we have to revoke the commodification of education, and to do that we have to overthrow the tyranny of commodities as such.

[1] Not doing your own work: we saw the same fear with the use of calculators in secondary-school math classes; by the time we reached university, using a calculator was no longer a concern. Just a thought.
