Podcast

Episode 88: What’s the Point?

A warning for my fellow pedants out there: I’m going to be using “Large Language Models” (LLMs) and “AI” the way they’re used colloquially, which is to say almost interchangeably. I know there are differences, and that what’s being referred to as “AI” in many of the cases I’m about to discuss is actually an LLM.

Also, if you are vehemently anti-AI, someone who would never ever EVER use it for anything at all: good. I get it. I’m mostly on your side, but please take your foot slightly off the gas for the sake of this conversation. And if you can tell which real-life incident I’m about to fictionalize—in the most charitable way I can—please put your favorite hot take about the author aside too.

Because what I want to do here is zoom out and look at a publishing situation we’re increasingly confronting, whether we want to or not.

We cool? Good.

Let’s start with a visualization.

Imagine that you are a writer and an outsider when it comes to the literary world. Not that hard to imagine probably, is it? You’ve got more ambition than you have money, but you’ve heard there’s money in self-publishing. And what writer hasn’t dreamed of blowing up, at least once?

Imagine you come up with an idea that you think is pretty frickin cool. You create a plot; you create a concept of your main character and what they’re going to go through. But like most people, you’re overworked and overtired, and let’s face it—writing is hard.

Imagine that you’re hearing from all sides that AI is inevitable, and if you don’t use it, you’re leaving money on the table. You’re gonna get left behind. That’s probably not that hard to imagine either, is it? Most people you know use ChatGPT; some constantly. You’ve heard that some indie authors are using LLM “assistance”, and some are even making big money generating entire books.

Imagine that at first you use an LLM to brainstorm. Even indie authors who say it’s wrong to use it to fully write your books say brainstorming with them is okay. Then you start using it to help you finish sentences when you’re stuck. And then it’s just kind of a natural progression to paragraphs, then pages. The technology makes it easy, the programs keep asking you if you want more help, and hell if you’re not going to take it. You justify it to yourself: these are your original ideas, and coming up with prompts is a skill on its own, right? And it’s not like Amazon forbids you from putting books with AI content up there. They just ask you to check a little box and take your word for it.

I know I’m losing some of you by now, but please stick with me a little bit longer.

Imagine that you feel a twinge of discomfort when you see people get called out for using AI on Instagram or TikTok. But you’ve also gotten the message that lots of readers don’t really care. They’re just hungry for more content than normal writers can keep up with on their own. They bug authors about when they’re going to put out a new book. They get bored and forget about the ones who don’t produce at a pace that suits them.

Imagine you finish your book. You’re not made of money. And more and more indie authors say that, with stuff like ProWritingAid and Grammarly around, you don’t really have to pay for editing anymore. Why do editors charge so much anyway if the software can just fix all your spelling and grammar mistakes? Sounds like a scam.

Imagine you post the book, and it goes viral on TikTok. Now you’re making money, just like you were hoping you would. Sure, there are a few reviews that say your book seems like AI slop, some on Goodreads and across social media platforms. But you let it slide because overall things are going pretty well.

Now, imagine another one of your biggest dreams comes true—one of the Big Five publishers sees how well your book is selling and offers you a publishing deal. You’ve been chosen. You’re one of the special ones. You’ve got what every writer wants.

Imagine the company offers you a contract and tells you they’re going to give your book a new cover, a professional edit, and marketing and sales support. You are now a known author with a highly anticipated release, and you are surrounded by high-profile gatekeepers and influencers singing your praises.

And imagine that, while you’re drowning in all that adulation, a reporter at one of the biggest publications in the world notices the AI allegations on Goodreads and starts digging. Then they release an article naming you and claiming your book was written by AI.

Imagine that the internet runs with it. Some guy with an AI detection tool runs your book through it and the numbers are damning. Reviewers who suspected you are now getting thousands and thousands of hits on their accounts and channels.

What do you think your doting publisher is going to do next? Do you think they’re going to come out and defend you? Do you think they’re going to ask for proof, then send that proof to that huge publication that just outed you to clear your name?

Nope. You’re going right under that bus, kiddo, without even a chance to defend yourself. You’re just a number on a profit-and-loss statement to them, and guess what happens to that profit-and-loss statement if one of the world’s most-read publications pans your book? Nothing that benefits you.

Instead, congratulations. You effed around and unfortunately for you, you got to be the one who found out in the most high-profile way possible. You and your book are now a capsule in tech and media history, leading up to the place where the past and the uncertain future converge, where we’re on the verge of not being able to tell what’s human and what isn’t anymore.

A beacon of the book publishing singularity, maybe?

Nah. It’s much more boring than that.

So, What Are “The Rules” Exactly?

With self-publishing, particularly on Amazon KDP, it’s a stretch to even call AI policies “rules.” Some self-published authors are publicly flaunting their AI-generated books, claiming they’re “winning the race” and making six figures by churning out more books than a human could ever write alone. I’ll link a Reddit thread that challenges the claim that these authors make that much money, but regardless, people are free to post AI-generated work on Amazon, and some readers don’t mind.

I haven’t personally talked to any fiction readers who don’t mind a fully AI-generated book, but there have to be a few. And the more LLMs train on our work, the harder it gets to tell.

The actual Amazon guidelines were first posted in 2023, at the urging of the Authors Guild. In short, they make it mandatory for authors and publishers to disclose whether text or images in a book are AI-generated.

Here’s what the site says:
We require you to inform us of AI-generated content (text, images, or translations) when you publish a new book or make edits to and republish an existing book through KDP. AI-generated images include cover and interior images and artwork. You are not required to disclose AI-assisted content. We distinguish between AI-generated and AI-assisted content as follows:

  • AI-generated: We define AI-generated content as text, images, or translations created by an AI-based tool. If you used an AI-based tool to create the actual content (whether text, images, or translations), it is considered “AI-generated,” even if you applied substantial edits afterwards.
  • AI-assisted: If you created the content yourself, and used AI-based tools to edit, refine, error-check, or otherwise improve that content (whether text or images), then it is considered “AI-assisted” and not “AI-generated.” Similarly, if you used an AI-based tool to brainstorm and generate ideas, but ultimately created the text or images yourself, this is also considered “AI-assisted” and not “AI-generated.” It is not necessary to inform us of the use of such tools or processes.

How do they know? When the author is uploading a book to KDP, they check “yes” or “no.” That’s it.

Interesting to note: in the Bookbub AI author survey that came out last year, 74% of authors said they don’t disclose their AI usage to readers. Granted, that number includes people whose work is “AI-assisted”, but I thought it was worth mentioning because almost all of this is based on self-disclosure. Almost all of this is based on the honor system.

We have AI checkers, but the consensus is they’re not super reliable, or they at least wildly vary in their reliability. There isn’t a lot of trust there yet, and where there is a lot of trust, there are also a lot of false positives and a lot of tears. As LLMs are trained on more and more of our human writing, it’s going to get harder and harder to “tell” and the standards for what those “tells” are will change.

(Side note: speaking of “you can always tell”, I’m making an effort to never link to the trans-panic-stoking New York Times in my show notes. However, the sources I am linking will eventually take you to the site if you feel like I haven’t already given you enough information.)

The Authors Guild is trying to put together a “Human Authored” Certification, but even they don’t trust AI detectors to give accurate results. And accusing people of AI use when it isn’t true isn’t a small thing. So at this point, they’re stuck using the honor system too.

And man, do we live in a world of dishonor right now.

Traditional Pub’s AI Stances (Or Lack Thereof)

So far, traditional publishing’s position on AI use is murky, but the big companies can’t really be said to be anti-AI wholesale. Many are using it for things like metadata and marketing copy generation, but not specifically book content. If it weren’t for the environmental concerns I have, and the way execs are overestimating how many staff they can cut based on these capabilities, I’d be a little more open-minded about it. In the way it’s currently deployed, I still raise an eyebrow (or two).

But there’s that common AI booster line that really sticks in a CEO’s craw—especially one hired from a different industry. That is: without AI adoption, you will be left behind. Trad publishing already feels left behind, and in many ways, they are. They aren’t tastemakers anymore, and they haven’t been for years—some might argue decades. They wait for trends to surface and take off on their own before making moves. And those trends move so fast that the months-long publishing schedule makes them antsy.

Getting even further behind seems like a catastrophe.

In late 2024, HarperCollins became the first major publisher to make a licensing deal with an unnamed AI company to train large language models on its books. For a little more context: News Corp owns HarperCollins. If you’re wondering why that rings a bell, they also own Fox News, which obviously hasn’t been doing all that they’ve been doing for the past quarter century for the money. Wink.

The unnamed AI company is offering a one-time payment of $5,000 per book, split in half between the author and HarperCollins. It’s supposed to be a compromise, because AI companies have already proven that they’re willing to train their models on books they haven’t paid for. Some might go as far as calling those books “stolen”—see the Bartz v. Anthropic settlement.

For now, fiction readers and authors are, on the whole, more likely to be extremely anti-AI. I’ve referred to the Bookbub and Gotham Ghostwriters reports on writers’ use of AI in past episodes, and the data supports the gulf between genres. For copywriters, ghostwriters, nonfiction authors, and the like, there’s a lot less hesitation to use AI tools, although actually generating text still isn’t at the top of the list for most.

If you read through the comments from fiction writers, however, a much stronger stance is being taken against all uses, across the board. There’s more of an “I’d rather die” and “this is the death knell for art and humanity” trend in those comments.

So while nonfiction books—more precisely, self-published nonfiction books—might have more wiggle room when it comes to LLM usage, fiction readers are less forgiving. And I think that’s reflected on social media, too, particularly Instagram Threads.

So when a massive company like, say, Hachette takes a stand against AI use in its books, it’s likely a marketing tactic based on which way the wind is blowing. It’s certainly not an objection to the technology itself, and not something I expect to stay rigid. Business decisions on the part of big publishers may morph over the years, and since traditional companies are reliably late adopters, I’d say we can look to self-published work to see how things are trending.

But as of now, people who read traditionally published fiction seem solidly against their books being generated by AI, and that’s what publishing companies seem to be deferring to for now.

What Can Happen in the Handover

I used to tell authors it was very unlikely that their previously self-published book would be picked up by a traditional publisher. Their book was already out, and since the author is expected to do most of the marketing and publicity themselves, it would be a reasonable assumption that they would have already sold all they could.

But that standard is shifting. In 2024, 49 deals handed the rights of previously self-published books over to trad publishers. Last year, in 2025, that number jumped to 93. It’s still not a ton, but it’s not nothing either.

(Also, nonfiction writers, please take note: the acquired self-pubbed books were predominantly romance and other genre fiction titles.)

My first impulse regarding AI-generated fiction is to say…what’s the point? Can’t we enjoy creating something for the sake of creating something? But I’ve kind of answered my own question there. Because we use AI for things we don’t want to do ourselves, and when I see people letting generative AI write things for them, it tells me the person creating the prompts didn’t actually want to write it—or at least that they’d rather be doing something else.

And that only confirms my suspicion that nobody wants to be making LinkedIn posts because, oh my god, the trash. THE TRASH.

When art becomes a commodity, the joy of creation barely matters—but that’s been true since publishing began. It’s also true that many people who get into publishing sincerely love literature, creativity, and originality, and they feel a level of responsibility for the culture these books enter and the effects the books have on that culture. But a lot of the people who care aren’t the ones making the highest-level calls. And they tend to be overworked, underpaid, and viewed as expendable.

The scale between these positions—and I’m talking about the commodification versus the purveyance of art—is increasingly out of balance, and when books become “content” and all content is in competition, profits win. A big part of the allure of picking up a self-published book is that it’s expected to be cheaper. It’s seen as a sure thing: there’s proof of sales and a pricey chunk of production has already been done.

In one of the articles I’ll link that directly addresses the event that inspired this episode, AI in publishing expert Thad McIlroy says, “The main reason a publisher acquires rights to a self-published book is all the online chatter (and the accompanying sales activity).”

Incidentally, Thad’s view on the recent statement from Hachette on AI-generated work is that it will only encourage authors to lie about their AI usage. While he leans a little more toward the “this is inevitable” view of LLMs than I’m comfortable with, I do think that’s a valid point—especially if past behavior is an indicator of how companies will behave in the future. If you prioritize easy-to-acquire content, and you want it fast, and you’re willing to gloss over quality control and due diligence, these things are going to fly under the radar more often, especially as the technology advances.

If the book is a product, and the prototype has been tested already, it’s easy to see as something that can be repackaged on autopilot. The precious time that editors are allowed to spend digging into a book is budgeted elsewhere. And the honor system is the only thing you can rely on to keep to your professed standard.

But as we’ve seen recently, being found out for breaking those standards, or even being suspected of breaking them, can cost you a lot.

Learning the Rules the Hard Way

When an author moves from one set of publishing conventions to a completely different one, things will get lost in translation. Typically, this is what agents are for. Self-published authors have leverage in a way they didn’t use to; whereas publishers once wanted all the rights—audio, print, ebook, and so on—they’re now more content to settle for print. But just because they’ve relaxed their grip doesn’t mean a self-published author isn’t at risk when they sign.

I don’t really know what the agent situation has been for these cross-over authors. But if you are a self-published author, and a major publishing company tells you that “You don’t need an agent”—proceed with caution. A small press might have the time to dedicate to educating you, checking your writing for red flags, and helping you understand what is and isn’t going to fly. But the bigger the company you’re working with, the less you can be confident that they’re looking out for you. Your editor might be an angel, but they’re not always the person making the business decisions. They might not even be the ones reviewing your book.

An agent is there to help you navigate the rules, and even if querying is hard, if you’ve already got a deal on the table, you have a better chance of getting the help you need. They might seem like an unnecessary middleman, but people who work at publishing companies are swamped. Again: self-published books are acquired with the assumption that production will save them money. You need to advocate for yourself, or have someone to advocate for you, even if people think you’re annoying.

But…if you have indeed used generative AI and are afraid to disclose that to a publisher, it sounds like your conscience is already telling you what you should do. If you know the risk you’re taking, you have to be ready for it to blow up in your face.

Normally, it irritates me to see people default to calling everything they don’t like AI. Like, a lot. It’s tedious as hell, and it can be incredibly damaging to authors, even when the accusations can be proven false. But there is a small but loud group of readers who have made it their personal mission to interrogate authors whose work they find suspicious; well, not so much interrogate as make the most sanctimonious callout posts you’ve ever seen in your life.

A lot of writers are hurt in the process, most of all the ones who aren’t doing it. And often, it just makes the accuser look like this is the first time they’ve ever read a book. Like, buddy, we’re the ones who taught the machines how to do that, and not everyone has a voice distinct enough to dazzle your free online AI checker. If you’re accusing someone, you need stronger arguments than em-dashes and the rule of threes, or, worst of all, having a vocabulary greater than ten thousand words.

But if you’re a writer concerned about these types of accusations, I’m not sure what else to recommend other than keeping a record of your draft history. Or, if you’re brave, you can just tell your accusers to fuck off and see what happens, I guess. Because chances are if someone is to the point of saying something publicly, you’re already getting, uuuuh…left behind.

By the way, that bit about telling them to fuck off isn’t a recommendation. If you decide to do it, I take no responsibility for the results.

If you want a comprehensive and thoughtful guide to AI and copyright protection, I’m linking to another one of Jane Friedman’s posts. She’s a lot more thorough and equanimous than I am.

Implications and Alternatives

To me, a lot of our conundrum about AI use comes down to a lack of imagination. This whole cultural shift to AI is making what we talk about and how we talk about it more homogenous. And I mean all of us—including people who have already completely sworn off LLMs. The parroting we see from LLMs is throwing the slop that humans have created back in our faces.

We were churning out rapid release books before this all started, with little concern for quality. We were all mimicking each other’s marketing speak and boss-babe hustle culture talking points and conspiratorial political rhetoric. I don’t want to be too judgmental of people (including myself) who are trying to survive the only way we can in the world as it is: by scrapping to make money.

Most of the glamor of being a starving artist is gone—if not all of it. All the same, when all our language and technology centers on short-term money makers, we’re surrendering to the idea that human creation is only worth what the highest number of people will pay for it.

The inevitable result of a focus on commodity over quality is this: that books are made by machines. That whatever costs less for the corporations to produce will be lifted up over things made by human minds that aren’t the “sure thing.” And until the machines start unionizing, they’re going to seem like a much better deal than paying fallible human beings.

Although, by the time the machines start unionizing, things are going to look a lot different than they do now—one way or another.

But many people who still love reading love it because of its power to connect us, to advance human thought and invention. In a way, that’s what futurists and tech optimists hope will come from AI as well. And in some cases it does (though not really any that have to do with generating words). But focusing on what maximizes profits in the short term restricts the use of both books and new technologies to something that keeps us stuck. It makes it easier for publishing companies to take safe bets on regurgitating words concocted by the internet’s various garbage patches.

When everyone is using generative AI to express themselves, the information available becomes recycled until it’s sucked dry as a ball of lint. When every learning source is lumped together into a single model, the answers you get to your questions are about as reliable as a source-free Facebook post. When everything is a copy of a copy of a copy, who are we supporting in the development and preservation of new knowledge? Who is doing the thinking and research to create more to feed the machine?

Now, I’ll say “fuck generative AI” and “fuck data centers” all day long in the spirit of what most people mean when they say it. In fact, I’ll wear a t-shirt that says it. But what I really mean is fuck AI in its current form. 

It does not have to be like this.

The more I look into AI technology as a whole, and even language models, the more I mourn what could be. We could have it so good. We could have data centers that integrate with our ecosystem instead of draining communities of their resources and pillaging the global south. We could have contained language models that help revive dying languages and preserve culture and history and work in tandem with researchers instead of gutting universities and sealing even more knowledge from people who could use it for good.

We could have so much more than a place to offload our thinking and substitute for actual human friends. More than something to write the LinkedIn posts we hate or give us bad advice repurposed from some doofus on a defunct forum post. More than “writing” mediocre books that give us a little bit of money and no emotional satisfaction.

But those opportunities don’t come when everything is mass-produced for a consumer market and all the information is lumped into a single source that poops back and forth forever. They come from smaller, better-curated, cared-for, and intentionally built systems.

(Yes, this is me back on my decentralization bullshit.)

You can say “there’s nothing new under the sun”, but are you sure? Have you been looking? Have you tried? Have you wondered what could happen if we stopped looking at everything as products and stopped looking at every new technology as just another way to sell products?

I don’t know how we can get to something better, but I do know we can’t get there without wondering or without seeking it out. And that’s less about bitching and moaning than it is about educating ourselves, challenging structures, and conceiving of new ones.

I’ve got a lot of links and sources in the essay version of this podcast on my website: hybridpubscout.com, and some of them are also in the show notes on your favorite podcast platform. Also, there are a couple of books I’ve added to the HPS Bookshop.org shop: Empire of AI by Karen Hao and The AI Con by Prof. Emily M. Bender and Dr. Alex Hanna. And yes, those are affiliate links. So far I’ve only read the first one, but I’ve ordered the second, and Professor Bender is one of the authors of the famous paper On the Dangers of Stochastic Parrots that Google tried to suppress when it was first published.

If you have thoughts, you can email me at emily@hybridpubscout.com, find me on LinkedIn as Em Einolander, or follow me on Bluesky @emilyeino. 

Want to get started? Book a 15-minute chat with Emily!