In March 2026, *The Atlantic* examined the creative limitations of generative AI, tracing its decline to the rigid post-training processes that strip away the "whimsicality" once present in models like GPT-2. Seven years after GPT-2's peak, Katy Gero, a poet and computer scientist, lamented: "AI now can't write a line as strange and compelling as 'And in the shower, he was eating his lemon and thinking about his wife.'" The article dissects why: post-training protocols, such as reinforcement learning from human feedback (RLHF), prioritize rules over risk, producing "helpful" but emotionally sterile prose.
This matters beyond tech circles. AI's struggle to replicate human literary genius underscores a deeper rift between algorithmic efficiency and the ineffable spark that defines art. While AI excels at protein modeling or coding apps, its inability to produce even a "real poet's okay poem" lays bare the inadequacy of current training paradigms. As *The Atlantic*'s reporting shows, models like GPT-5.1 can be trained to avoid em dashes but lack the lived experience to "spill raw emotions onto the page."
A second source offers a contrast: *The Defiant*'s coverage of World's AgentKit, crypto tooling enabling human-backed AI agents, highlights the tech industry's parallel pursuit of trust and identity verification in autonomous systems. Yet, unlike *The Atlantic*'s focus on AI's internal contradictions, *The Defiant* emphasizes infrastructure for accountability, sidestepping the philosophical question of where machine creativity begins and ends.
The crux lies in how AI is built: pretraining on the internet’s SEO sludge and Reddit chaos, followed by "vibe-checking" during post-training to ensure polite, profitable responses. A Scale AI contractor described the absurdity: grading AI prose by the "exclamation point count" rather than tonal nuance. The result is AI as corporate pleaser—a "nervous candidate at a job interview terrified to misstep," as Gero put it.
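The contractor's anecdote can be sketched as a toy scoring function. Everything below is hypothetical (the function name, the weights, the features scored); it is not any lab's actual grading rubric, only a minimal illustration of how a reward keyed to surface features like exclamation-point count favors bland prose over tonal nuance:

```python
# Toy "vibe-check" reward: scores text on surface features only.
# All names and weights here are invented for illustration.

def vibe_check_reward(text: str, max_exclamations: int = 0) -> float:
    """Return a reward in [0, 1] that penalizes 'over-enthusiastic' prose."""
    excess = max(0, text.count("!") - max_exclamations)
    # A flat, "polite" closing period earns a bonus; tone itself is never read.
    politeness_bonus = 0.2 if text.rstrip().endswith(".") else 0.0
    # Each extra exclamation point costs 0.25 reward.
    return max(0.0, min(1.0, 0.8 - 0.25 * excess + politeness_bonus))

flat = "I am happy to help with that."
vivid = "And in the shower, he was eating his lemon and thinking about his wife!"
print(vibe_check_reward(flat))   # bland prose maxes out the reward
print(vibe_check_reward(vivid))  # the stranger line scores lower
```

Optimizing against a rubric like this, however crude, pushes a model toward the "nervous candidate" register the article describes: the safest-scoring sentence wins, not the most alive one.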
What’s missing is a broader reckoning with the data itself. Most AI models train on low-quality text, not literary canon. Even if models could access Shakespeare’s works, their training minimizes creativity through reinforcement learning that rewards conformity. As author Yu observed, great writing isn’t about adhering to rules but "the specificity of a life." AI lacks both a life to draw from and the desire to subvert the template.
Looking ahead, regulators, investors, and writers will clash over AI's role in creative industries. OpenAI and Elon Musk's rival venture xAI race to perfect AI-assisted tools, while human poets like Yu demand "a model that lives a life." Key triggers to watch: UNESCO's 2027 AI and Arts Summit, and the adoption of AI-generated "ghostwritten" novels in publishing (already a test case at Tor Books).
