Use of Generative AI Not Recommended
I do not recommend the use of generative AI for creating papers from scratch, writing component paragraphs, or even smoothing out drafts. AI text has an inhuman quality which makes it unsuitable for serious purposes, and I do not think that any future development of AI will change this. This inhuman quality arises because AI generates text by predicting what word is most likely to come next in a sentence, not through anything like a process of thought. An AI that really thinks, rather than predicts, is still unimaginable; where science fiction films have pretended to imagine such a thing, they have really been imagining a human being dressed up as a robot.
Certainly for academics the use of generative AI to create papers from scratch is beside the point, since the aim of writing papers is to communicate, which means one mind reaching out and contacting another. If communication is the goal, it makes no sense to erase ourselves from the communicative relation by substituting AI. I write on the assumption that my audience wants to know what I, personally, think.
Yet perhaps AI could at least be used to speed up the evolution from draft to final product: that is, to take over exactly the role played by academic copyeditors. Since that is my own role, my opinion here will hardly be surprising. But the problem is that whether a text has been generated from scratch by AI, or only smoothed out, it still has a character very different from human writing.
The Value of Mistakes
There are many excellent and very funny guides now available which teach how to spot AI-generated text, the overuse of em-dashes and the incessant resort to lists being prominent signs. But rather than looking at specific signals, I am interested in the general characteristics of AI text. And here I think there are three main things. First, the text is superficially perfect, free of typographical mistakes or grammatical slips; second, the style is bland, friendly, and free of self-doubt (though choosing different settings can vary this to some extent); third, the content regularly contains absurd and preposterous errors.
None of these characteristics by itself marks a text as AI-generated, since human writing can have any of them individually (although the errors in AI text are of a different order than any a human could make); but human writing would not have all these characteristics at the same time: together they form an inconsistent set. If someone writes to us in a friendly and informal tone, we expect to find minor errors, indicating that the writer was relaxed, did not labour over their words, and expected us to receive them in the same spirit. On the other hand, the absence of minor error would indicate that the author has worked hard to polish their text, something which a person would normally only do after first checking for gross errors; conversely, where there is gross error we expect to find many minor errors too.
These expectations are just a few of the vast number of unconscious presuppositions we bring to the act of reading. Without them, understanding would not be possible at all, since it is these presuppositions that allow us to narrow down the infinitude of possible interpretations of any sentence to a much smaller set which our finite minds can process. In one of the stories by Jorge Luis Borges there is an alien book found in an infinite library which says something like: How do you, reading these words, know that they are written in English, and not in another language which is indistinguishable from English on the page but has an entirely different grammatical and semantic structure? The answer is that our shared humanity rules this out.
Understanding Depends on Shared Humanity
When we begin to suspect that the text we are reading has been influenced by AI, these presuppositions lose their authority. The shared humanity which guarantees uniformity of interpretation can no longer be relied upon: the result is that the act of reading becomes exhausting. Rational instinct warns us that what we are reading cannot be trusted, but still the superficial perfection of the text keeps signalling that we can have confidence in it: the eye tries to escape this cognitive dissonance by drifting off the line; the mind resists what the text is saying for fear that it is senseless; and we feel the absence of any author even as we resent the friendly and overconfident tone. AI text compromises the basic human agreement on which understanding depends: this is what causes the sense of vertigo as we read it.
It is much easier to edit text that contains errors than it is to check text that has been smoothed by AI. The presence of error reveals the personality of the author, and supplies clues as to how that personality could be better reflected in the text. The suspicion that AI has been applied to the text breaks the link between surface perfection and deeper reliability. The editor is now obliged to interrogate every sentence, no matter how well formed, for signs that AI has smoothed away the author’s intended meaning and substituted instead a well-polished absurdity.
My Unwise and Complacent Conclusion
AI certainly produces text with impressive speed, and perhaps it does have a role in the context of purely technical writing; this vast and resource-hungry digital infrastructure is also quite good at comic pastiche. But although editing is widely thought to be among the professions liable to be driven extinct by it, and I am often advised to make use of AI tools so as to secure a niche for myself in what is supposed to be a rapidly contracting sector, in fact I see little significant impact. The vogue for generative AI will not overcome our existential desire to communicate, nor the value we place upon our shared humanity.