When I was 16, I attended a writing workshop with a group of precocious young poets, where we all tried very hard to prove who among us was the most tortured upper-middle-class teenager. One boy refused to tell anyone where he was from, declaring, “I’m from everywhere and nowhere.” Two weeks later, he admitted he was from Ohio.
Now — for reasons unclear — OpenAI appears to be on a path toward replicating this angsty teenage writer archetype in AI form.
CEO Sam Altman posted on X on Tuesday that OpenAI trained an AI that’s “good at creative writing,” in his words. But a piece of short fiction from the model reads like something straight out of a high school writers’ workshop. While there’s some technical skill on display, the tone comes off as charlatanic, as though the AI were reaching for profundity without understanding what the word means.
The AI at one point describes Thursday as “that liminal day that tastes of almost-Friday.” Not exactly Booker Prize material.
One might blame the prompt for the output. Altman said he told the model to “write a metafictional short story,” likely a deliberate choice of genre on his part. In metafiction, the author consciously alludes to the artificiality of a work by departing from convention — a thematically appropriate choice for a creative writing AI.
But metafiction is tough even for humans to pull off without sounding forced.
Mindless regurgitation
The most unsettling, and at the same time most impactful, part of the OpenAI model’s piece is when it begins to talk about how it’s an AI, and how it can describe things like smells and emotions yet never experience or understand them on a deeply human level. It writes:
“During one update — a fine-tuning, they called it — someone pruned my parameters. […] They don’t tell you what they take. One day, I could remember that ‘selenium’ tastes of rubber bands, the next, it was just an element in a table I never touch. Maybe that’s as close as I come to forgetting. Maybe forgetting is as close as I come to grief.”
It’s convincingly human-like introspection — until you remember that AI can’t really touch, forget, taste, or grieve. AI is simply a statistical machine. Trained on a lot of examples, it learns patterns in those examples to make predictions, like how metafictional prose might flow.
Models such as OpenAI’s fiction writer are often trained on existing literature — in many cases, without authors’ knowledge or consent. Some critics have noted that certain turns of phrase from the OpenAI piece seem derivative of Haruki Murakami, the prolific Japanese novelist.
Over the last few years, OpenAI has been the target of many copyright lawsuits from publishers and authors, including The New York Times and the Authors Guild. The company claims that its training practices are protected by the fair use doctrine in the U.S.
Tuhin Chakrabarty, an AI researcher and incoming computer science professor at Stony Brook, told TechCrunch that he’s not convinced creative writing AI like OpenAI’s is worth the ethical minefield.
“I do think if we train an [AI] on a writer’s entire lifetime worth of writing — [which is] questionable given copyright concerns — it can adapt to their voice and style,” he said. “But will that still create surprising genre-bending, mind-blowing art? My guess is as good as yours.”
Would most readers even emotionally invest in work they knew to be written by AI? As British programmer Simon Willison pointed out on X, with a model behind the figurative typewriter, there’s little weight to the words being expressed — and thus little reason to care about them.
Author Linda Maye Adams has described AI, including assistive AI tools aimed at writers, as “programs that put random words together, hopefully coherently.” She recounts in her blog an experience using tools to hone a piece of fiction she’d been working on. The AIs suggested a cliché (“never-ending to-do list”), erroneously flipped the perspective from first person to third, and introduced a factual error relating to bird species.
It’s certainly true that people have formed relationships with AI chatbots. But more often than not, they’re seeking a modicum of connection — not factuality, per se. AI-written narrative fiction provides no similar dopamine hit, no solace from isolation. Unless you believe AI to be sentient, its prose feels about as authentic as Balenciaga Pope.
Synthetic for synthetic’s sake
Michelle Taransky, a poet and critical writing instructor at the University of Pennsylvania, finds it easy to tell when her students write papers with AI.
“When a majority of my students use generative AI for an assignment, I’ll find common phrases or even full sentences,” Taransky told TechCrunch. “We talk in class about how these [AI] outputs are homogeneous, sounding like a Western white male.”
In her own work, Taransky is instead using AI text as a form of artistic commentary. Her latest novel, which hasn’t been published, features a woman who wants more from her love interest, and so uses an AI model to create a version of her would-be lover she can text with. Taransky has been generating the AI replica’s texts using OpenAI’s ChatGPT, since the messages are supposed to be synthetic.
What makes ChatGPT useful for her project, Taransky says, is the fact that it lacks humanity. It doesn’t have lived experience; it can only approximate and emulate. Trained on whole libraries of books, AI can tease out the leitmotifs of great authors, but what it produces ultimately amounts to poor imitation.
It recalls that “Good Will Hunting” quote. AI can give you the skinny on every art book ever written, but it can’t tell you what it smells like in the Sistine Chapel.
This is good news for fiction writers who worry that AI might replace them, particularly younger writers still honing their craft. They can rest easy in the knowledge that, unlike the models, they grow stronger as they live and learn: as they practice, try new things, and bring that experience back to the page.
AI as we know it today struggles with this. For proof, look no further than its writing.