TL;DR
A writer tested an AI trained on his past posts and found the outputs superficially convincing but off in tone and direction. He argues that the daily struggle of writing, the thinking, the researching, the sitting with being stuck, is central to creating distinctive work in ways AI cannot replace.
What happened
In a January 2026 blog post the author recounted an experiment: a third party trained an AI on his prior pieces, then fed it headlines and opening paragraphs from his 2025 posts to see if the model could complete them in his voice. After comparing a handful of AI-generated continuations to the originals, he judged the results to fall into an uncanny-valley zone—initially plausible but noticeably off on closer inspection. The machine often shifted argument or tone, and expressed confidence where the human author would display doubt. A friend, Nick Wignall, connected this experience to comments by Ezra Klein, who said he uses AI only for light research or data structuring because automated summaries miss the particular questions a writer brings. The post argues that the real value of daily writing lies in the labor of thinking and wrestling with drafts—work that, according to the author, AI cannot supply.
Why it matters
- AI can produce fluent text that may appear correct at first glance but still misalign with an author’s intent or emphasis.
- Routine writing practices function as cognitive training and a promise to readers, benefits AI-generated output does not deliver.
- Relying on AI to remove creative obstacles risks replacing deep, original connections with shallow, average-driven filler.
- If many writers take shortcuts, distinct human-driven work could become more valuable by contrast.
Key facts
- The piece was published as a daily blog entry on January 2, 2026.
- Someone trained an AI model on the author's past writing and provided headlines and opening paragraphs from 2025 posts.
- The author reviewed a small sample of AI completions and found them plausibly written but frequently off in argument or tone.
- The author described the outputs as exhibiting an "uncanny valley" effect: superficially right but subtly wrong.
- The author discussed the experiment with writer Nick Wignall on a catchup call.
- Ezra Klein was cited as using AI only for light research or structuring data rather than for core writing work.
- Klein and the author argue that outsourcing research or the thinking stage can destroy the unique connections a writer makes.
- The author believes that the "suck" of doing difficult creative work is essential; shortcuts lead to the lowest common denominator.
What to watch next
- Whether models trained on individual authors can eventually match their judgment and argumentative choices: not confirmed in the source.
- If more writers adopt AI to avoid the difficult, iterative parts of writing and how that affects overall content quality: not confirmed in the source.
- How long-form reading habits change in response to AI summarization tools and whether that shifts value toward full-text reading: not confirmed in the source.
Quick glossary
- Uncanny valley: A phenomenon where an imitation appears almost, but not exactly, like the real thing and thus causes a sense of unease or mistrust.
- Language model: A type of AI trained on large amounts of text to generate or predict words and sentences.
- Summarization: The process of producing a shorter version of a text that highlights its main points, often used by tools to condense information.
- Creative ritual: A repeated practice or routine that supports skill development, idea generation, and consistent production of work.
Reader FAQ
Did the AI successfully reproduce the author’s voice?
The AI produced plausible continuations but frequently veered in argument or tone; the author judged them as noticeably off.
Will the author switch to using AI to write his daily blog?
No—the author said he would not use AI for that purpose because the daily practice of writing is a personal cognitive and creative discipline.
How does Ezra Klein view AI for writing work?
According to the post, Klein uses AI only for light research or organizing data and warns that automated summaries can miss a writer’s specific questions.
Does the post claim AI will make human writing obsolete?
Not confirmed in the source.
Sources
- The Suck Is Why We're Here