Sportswriting legend Red Smith once said that writing a column is easy: “All you do is sit down at a typewriter and bleed.” In 2026, though, no blood is required. All you do is sit down at a laptop and have Claude or ChatGPT write the story for you.
That seems to be the takeaway from a cluster of reports from the journalistic front of late. Last month, my colleague Maxwell Zeff wrote about writers who unapologetically generate at least some of their prose via unbylined AI collaborators. The star of his piece was Alex Heath, a tech reporter who said he routinely has AI write drafts based on his notes, interview transcripts, and emails. That same week, The Wall Street Journal profiled Fortune reporter Nick Lichtenberg, who explained to the paper that he leans heavily on AI to churn out his work. He has written 600 stories since July; on one day this past February, he had seven bylines.
Ever since reading these reports—thankfully produced by the human hand—I have been having trouble sleeping. Until recently, the consensus had been that using large language models to actually create commercial prose was verboten. Many publications, including WIRED, have firm guidelines against AI-generated text. We don’t use it for editing, either, a less alarming though still troublesome practice of several other outlets cited in Zeff’s column. The book publishing world, trying to protect itself from an avalanche of self-published slop, is still policing its catalog; Hachette Book Group recently retracted a novel that had apparently relied too much on the output of an LLM. But as the models turn out prose that is increasingly hard to distinguish from human output, the convenience and cost savings of using AI for the difficult job of writing are threatening to seep into the mainstream. The walls are starting to crumble.
As one might expect, a lot of people were unhappy to read about this development, particularly those like me whose keyboards are dripping with blood. But the subjects of the stories aren’t backing down. It’s as if they feel the future is on their side. When I contacted Heath—whose work I respect—he confirmed that he had gotten pushback but shrugged it off. “I see AI as a tool,” he says. “I don’t see it as replacing anything—the only thing that’s replaced is drudgery that I didn’t want to do anyway.”
Of course, the hard work of writing is, for people like me, a critical aspect of the whole effort, bringing one’s self to the task of communicating effectively and clearly. Heath thinks that he does connect with readers through his writing—he says that he has trained his AI to sound like him, and his Substack includes personally written tidbits about what he’s up to. On the other hand, he tells me that since he talked to Zeff, he has almost “one-shotted” a couple of his columns. “When I say one-shot, I mean I almost didn’t need to do anything,” he says. But Heath disputes the idea that letting AI write prose for him means that he’s bypassed the thinking process that many believe can only happen through actual writing. “I’m just getting rid of that very messy, painful, zero-to-one blank page,” he says.
The Fortune writer who was the subject of the Journal article has also suffered repercussions, not just from the public but also from his friends and colleagues. “I’m feeling a strain in close and personal relationships,” Lichtenberg admitted in an interview with the Reuters Institute for the Study of Journalism. In an email, Fortune’s editor in chief, Alyson Shontell, tried to steer me away from the idea that AI was taking over the jobs of reporters under her watch. “Importantly, [Lichtenberg] is not using it as a writing replacement,” she wrote. “His stories are ai assisted versus ai written. Still lots of ambitious reporting and analysis and reworking he is doing that’s highly original.”
The term “ai-assisted” is doing so much heavy labor here that it deserves its own paycheck. Here’s how Lichtenberg described his workflow to the Journal: He dreams up a headline and prompts Perplexity or Google’s NotebookLM to write an initial draft, which he moves directly into Fortune’s content management system. Only then does he edit the story, applying his knowledge and experience to massage the copy. Then, bang, he publishes it. No blood. No wonder he wrote 600 stories in less than a year.
And no wonder the idea of letting AI replace the voice of humans is so attractive to news publishers. Those relying on “AI-assistance” claim that these stories are not replacing the work of stylists but are put to use only in cases where the reader simply wants to consume information, be it a scoop or a description of some development. All people want is the facts!
This argument reflects something I’ve often heard from Silicon Valley techies who probably avoided English classes at Stanford. When I was writing my book about Google, Sergey Brin began one interview with a lecture about how books were an inefficient way to explain things. (That didn’t stop Google from scanning millions of books for its search business.) Crypto magnate Samuel Bankman-Fried, in a hagiographic profile funded by the VC firm Sequoia, said, “If you wrote a book, you fucked up, and it should have been a six-paragraph blog post.” (Maybe prison has changed his mind, and he’s now plowing through Robert Caro biographies.) Implicit in this viewpoint is the assumption that human expression gets in the way of pure information, and any human seepage into reporting is to be avoided. The ultimate spokesperson for that point of view is Marc Andreessen, who said in a podcast last month that the act of introspection was a recent and unwelcome development in the human experience.
That concept is so wack that even AI doesn’t accept it—that’s why LLMs are trained to mimic human expression. People crave connection in what they read. But because AI doesn’t live in the actual world, or have actual human experiences, no matter what it writes, or how clever it may be, or how much it takes on the voice of a singular flesh-and-blood writer, it can only play a partial role in human expression.
I think people sense this, and it explains why the reports of those at the forefront of using AI to write stories have been greeted with such enmity. Still, at the risk of being accused of introspection, I wonder whether my gut disgust at this phenomenon is a generational thing, a boomer affectation. I asked Heath, who is 32, and he replied there’s probably something to it. But he also says that younger people are just as stridently against using AI to draft stories. “Those who are 25 to 29 who work in the media hate what I’m doing,” he says. In part, that’s because Gen Z sees AI as a thief stealing their careers before they start.
Heath thinks that one day we’ll look back at this controversy and marvel that it was even a thing—like when people thought that using a typewriter was cheating. I’m not so sure. I’ve actually lived through the transition from typewriter to word processing. I’ve navigated from a print-centric to an online world. AI seems different. I do use AI for research and as a way to search through my interviews (which, of course, are transcribed by AI). One particularly useful tool is the aforementioned NotebookLM, where I can dump my interviews and notes and quickly locate who said what.
But as with other LLM products, this tool doesn’t seem satisfied with staying in its cage. It keeps asking me to allow it to do more. It’s always one prompt away from taking all that information I uploaded and producing a draft, maybe in what it considers my own voice.
That’s the red line I hope I will never cross. I’m not shaming those who do. Well, maybe a little. But they genuinely seem to see themselves as living in the future, and they might be right. The wheels are in motion. Fortune isn’t the only venue that’s experimenting with this: Business Insider’s staff policy allows AI “to assist with drafting,” among other uses. Other outlets will surely follow suit. But if the form of “ai-assistance” that includes actual writing catches on, we will all be impoverished by the loss of the human voice. Not to mention the soul. If I ever succumb to temptation and allow NotebookLM, Claude, ChatGPT, or anything else to write a draft of this newsletter, or, heaven forbid, a feature, you have permission to send me into exile. If I don’t do it myself first.
This is an edition of Steven Levy’s Backchannel newsletter. Read previous newsletters here.
