This one is exactly what it sounds like.

Writing to Identify – Project #2

A Rhetorical Analysis of Ian Bogost’s “ChatGPT is Dumber Than You Think” by F.K. Carlson

Ian Bogost refrains from burying the lede; in fact, he makes it the primary objective of the article, revisiting it throughout. “ChatGPT is Dumber Than You Think.” There it is, right there in the title. The insult. Not only does Bogost believe that ChatGPT is dumb, he thinks it’s even dumber than you realize, specifically noting ChatGPT and other language models’ inability to comprehend the underlying meaning of the words they generate. Bogost even includes a ChatGPT imitation of his own writing in the third sentence of the article: “ChatGPT lacks the ability to truly understand the complexity of human language and conversation.” The ChatGPT replication of Bogost’s writing continues, “It is simply trained to generate words based on a given input, but it does not have the ability to truly comprehend the meaning behind those words.” What the imitation lays out in several sentences, the title distills to its essence: ChatGPT excels at mimicry but has no actual understanding of the words it is parroting. In other words, ChatGPT is “dumb.” 

“Extraordinary claims require extraordinary evidence” is an aphorism known as the Sagan Standard, and like a good skeptical logician, Ian Bogost details several examples to validate his point. Bogost writes, “When I asked it to generate an imagist poem in the vein of Ezra Pound or William Carlos Williams about hamburgers…” Bogost quotes the poem ChatGPT generated, then disputes that the generated poem met the requirements. Bogost argues that ChatGPT’s poem is not an imagist poem, and he tells the program as much. ChatGPT replies, “You are correct, the poem that I generated is still not an imagist poem…” Bogost continues to ask ChatGPT to perform specific language-oriented tasks; the software complies; then Bogost analyzes the application’s responses and brings their shortcomings to the attention of the language model. In each instance the language model is quick to admit that it has not followed the instructions as asked. In each case the software is cobbling together language to make an approximation of what was requested.

In another example, Bogost writes, “I asked for source code for an Atari game about scooping cat litter, and the AI sent me valid programming instructions—it understood the assignment—but only disconnected snippets of actual code with the heading comment ‘is program creates a simple game where the player must use a scoop to pick up their cat’s litters and put them in a trash bin.’ It was an icon of the answer I sought rather than the answer itself.” We know what an icon is, but it’s worth further clarification for the sake of this discussion. An icon is a symbol that represents the distillation of a thing, one a person can instantaneously recognize, but it is not the thing itself. The “recycle bin” or “trash” on a desktop computer is not an actual trash bin, but we recognize what the icon represents, and use it accordingly.

Readers who are to be persuaded by Bogost’s argument must first and foremost be educated enough, or curious enough about the subject matter, to seek out this piece and to comprehend it. Bogost delves into the differences between perceptual, philosophical, and epistemological replies during his analysis, with no doubt about his audience’s ability to follow along. Bogost knows his audience. He knows they are educated, intellectually curious, and interested in understanding more about new technologies such as ChatGPT. In one example of exposing ChatGPT’s mimicry, Bogost asks it to compose a very specific type of esoteric medieval poem, and then laments that it did not compose the poem’s octosyllabic couplets properly: “When I asked about the matter, it admitted again that, no, the lai it had written was not structured in octosyllabic couplets…” Admittedly, I didn’t know what a “lai” was when I read the article, but I was delighted to learn about it through my reading. While I might not be as intelligent as the audience Bogost seeks, I am curious enough to be included.

I believe the audience will find Bogost’s writing compelling, as he paints a valid picture of software that essentially reorders pieces of language that are generally relevant to the request. Like Midjourney (which I, personally, have used to “generate” images), DALL-E, and others, ChatGPT is cobbling together something “new” out of many pieces of something already existing. Bogost’s argument relies on logos and ethos, with little effort given to appeals to emotion. He doesn’t seem interested in how ChatGPT makes us feel so much as he is concerned with the facts of what ChatGPT does: “GPT and other large language models are aesthetic instruments rather than epistemological ones. Imagine a weird, unholy synthesizer whose buttons sample textual information, style, and semantics. Such a thing is compelling not because it offers answers in the form of text, but because it makes it possible to play text—all the text, almost—like an instrument.” Bogost clarifies his position about ChatGPT being “dumb.” It’s not dumb like a screen door on a submarine; it’s dumb like an adjustable wrench is dumb. In other words, it’s a tool. It does what we need it to do. One of Bogost’s points is that many of us are trying to use the tool ChatGPT in an incorrect way, almost like using an adjustable wrench as a hammer. You can do it, but it’s not effective. Better to recognize the tool’s proper implementation, where it will function as needed. 

In using ChatGPT like a hammer, many folks are diving headlong into asking the language model to generate “new” textual content. Bogost argues that this is the incorrect usage of the tool, and that it fares much better as an aid than at doing the thing itself. Bogost writes, “Instead, we should adopt a less ambitious but more likely goal for ChatGPT and its successors: they offer an interface into the textual infinity of digitized life, an otherwise impenetrable space that few humans can use effectively in the present. To explain what I mean by that, let me show you a quite different exchange I had with ChatGPT, one in which I used it to help me find my way through the textual murk rather than to fool me with its prowess as a wordsmith.” After this, Bogost details how the software helped him find a certain household item he was searching for but whose name he couldn’t recall. ChatGPT found the name for him. 

Here is how Ian Bogost frames his push to recognize ChatGPT as the tool it is, rather than the tool many wish it to be: “But lacking that knowledge and nevertheless needing to deploy it in order to make sense of the world—this is exactly the kind of act that is very hard to do with computers today. To accomplish something in the world often boils down to mustering a set of stock materials into the expected linguistic form. That’s true for Google or Amazon, where searches for window coverings or anything else now fail most of the time, requiring time-consuming, tightrope-like finagling to get the machinery to point you in even the general direction of an answer.” Bogost proposes that ChatGPT will help us navigate an increasingly abstruse digital textual world, where conventional search engines have become so obsessed with sales metrics and advertising that they’re no longer relevant or helpful. In other words, Bogost leans toward language models as the new search engines. Not hammers. Not writers. Helpers. 

The author encourages the audience not to be afraid of this technology, but to recognize it for the tool that it is, and use it accordingly. He also reflects that the folks at the helm of ChatGPT don’t even have a specific vision or goal for the use of this technology, saying, “But a huge obstacle stands in the way of achieving it: people, who don’t know what the hell to make of LLMs, ChatGPT, and all the other generative AI systems that have appeared. Their creators haven’t helped, perhaps partly because they don’t know what these things are for either. OpenAI offers no framing for ChatGPT, presenting it as an experiment to help ‘make AI systems more natural to interact with,’ a worthwhile but deeply unambitious goal.” An untrained apprentice may be intimidated when confronting a beam saw or core drill for the first time. Once trained in its use, the tradesperson realizes the tool can be dangerous, but if respected and utilized by an experienced user, the tool won’t cause harm. Bogost refrains from appeals to emotion throughout this piece, relying instead on an appeal to facts, backed by trust in him, a trust validated by his thorough understanding of language and epistemology. 

The audience should walk away from Bogost’s piece feeling confident about their relationship to this emerging technology. Language models are not going to replace us; they are incapable of doing so at this juncture. General AI is not around the corner, as these language models still don’t even comprehend the language they juggle before they pitch it back to us. Bogost comes at the audience as an expert in language, in thinking, and in utilizing tools for their intended purpose, rather than running in fear from what we don’t understand. A measured, reasoned tone pervades his writing, building trust in him and his message. From the closing: “Computers have never been instruments of reason that can solve matters of human concern; they’re just apparatuses that structure human experience through a very particular, extremely powerful method of symbol manipulation. That makes them aesthetic objects as much as functional ones. GPT and its cousins offer an opportunity to take them up on the offer—to use computers not to carry out tasks but to mess around with the world they have created.” As a result of reading the piece, the audience should not fear ChatGPT, but feel as comfortable using it as a journeyman feels wielding a hammer. ChatGPT is “dumb” in the same way a tool is dumb. It cannot think; it cannot reason. A hammer does not create, but in the hands of a skilled tradesperson, it can be used to create or destroy. Much like ChatGPT. 

Link to Ian Bogost’s “ChatGPT is Dumber Than You Think” from The Atlantic.