Imagine a business open to customers only once a quarter. That's how traditional publishing operates—and it's ripe for AI disruption.
Hello Everyone!
A warm welcome to new readers.
As a thank-you for your support, I opened my entire catalog in July, and it will remain open until the end of August 2025. If you’ve been curious about the 80+ in-depth articles, feel free to explore. Here’s the topical index, and you can also browse by category on the site. My last post was on generative AI hacks, with shorter takes posted on Notes.
I stepped away from newsletters these past few weeks to dive deeper into AI and its fast‑changing tools, and to bring back insights to share here.
I also prepared my debut upmarket1 novel for traditional publishing, which gave me a front‑row view of how the industry may be inviting another round of tech disruption.
This is an Insight Edition—and a longer update after my brief hiatus. You may want to bookmark it for later reading.
Here’s what it covers:
Traditional publishing remains surprisingly low‑tech
AI tools are growing in power but declining in usability
Substack… is getting worse
TL;DR:
Traditional publishing is stuck in a slow, manual process — agents are overwhelmed, submissions pile up, and authors wait months (or years) for decisions. AI is poised to change this: large language models could triage the slush pile in days, freeing agents to focus on relationships and rights deals while speeding discovery for writers and studios hungry for new stories. But while AI tools grow more powerful, their usability is declining, and platforms like Substack are shifting toward social‑first engagement — raising questions about whether creators remain at the center of these changes.
Publishing: Is It Time for Another Tech Disruption?
When people worry about AI replacing human work, they often miss a more fundamental set of questions:
Is the process as efficient as it could be?
Is the industry saturated?
Are long delays to market really unavoidable?
For a new writer without industry connections, the process is painfully clear: manual, slow, and outdated—even at independent presses that claim to offer an alternative.
Here’s the low-down on the steps:
Research the right agent for your genre via agency sites (LLM-driven searches help) or ManuscriptWishList. Some even expect queries via Twitter (smh).
Query dozens of agents, usually by email, each with different and inconsistent requirements. Some mandate QueryManager; others don’t.
Wait 4–12 weeks for a response, often just a form rejection.
If accepted, add another 12–36 months before the book reaches readers.
To make matters worse, many agents are so overwhelmed they are regularly closed to submissions for months at a time—often, from spring through fall. Some even warn that queries sent during their “closed” periods will be deleted.
It’s understandable. If your main role is reading tens of queries a day and deciding which are worth the time investment that might lead to discovery fees and rights sales, capacity runs out fast.2
But what if you could reduce that to one week, independent of workload?
This tedious process presumably gave rise to independent platforms like Wattpad and Inkitt, which aim to make discovery and access more author- and reader-friendly. Yet their massive volumes3 quickly shift the conversation back to marketing and visibility.
Still, there’s no escaping traditional publishing.
The industry’s first major disruption came with digital reading and self-publishing platforms.
The next wave will inevitably be led by AI.
When an LLM can digest thousands of books in its archive and assess writing quality in seconds, how can a human ‘slush-pile’4 reader compete? Doesn’t the very term, reducing writer submissions to “slush,” practically invite AI to disrupt the process?
It’s also worth remembering that literary agents are often solopreneurs, working independently and without assistants to handle the workload.
Consider: in just four minutes, Claude surfaced 319 diverse sources, as seen below.
Perhaps the answer isn’t to resist but to leverage and train: using LLMs to filter the slush pile quickly and intelligently, freeing agents to focus on what truly drives their business — representing and growing their client lists. In short, make the LLM your Jarvis, and become the Literary Ironman.
Of course, no model can replace the human elements of agenting: building relationships, negotiating contracts, and guiding authors’ careers. But AI can free up agents’ time to focus on these higher‑value tasks. This, after all, is AI’s core productivity‑enhancing role across industries.
Here’s a simple experiment: ask an LLM (I recommend Claude) to analyze your favorite novel. Debate its conclusions as you would in a book club. Set the context, push back, and see whether the discussion elevates your insights and meets your bar.
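To make the experiment concrete, here’s a minimal Python sketch. The helper name, the prompt wording, and the commented-out Anthropic call (including the model name) are illustrative assumptions, not a fixed recipe:

```python
# Sketch of the book-club experiment: assemble a structured prompt,
# then (optionally) send it to an LLM. Helper name and prompt wording
# are my own, not an official workflow.

def build_book_club_prompt(title: str, author: str, focus_points: list[str]) -> str:
    """Assemble a book-club-style analysis prompt for an LLM."""
    points = "\n".join(f"- {p}" for p in focus_points)
    return (
        f"Analyze the novel '{title}' by {author} as if you were a "
        f"thoughtful book-club member. Address:\n{points}\n"
        "Take a position I can push back on, and cite specific scenes."
    )

prompt = build_book_club_prompt(
    "Beloved", "Toni Morrison",
    ["narrative structure", "use of memory", "what a film adaptation would lose"],
)

# To actually run the debate (requires `pip install anthropic`, an API key,
# and a current model name, which may differ from the one below):
# import anthropic
# reply = anthropic.Anthropic().messages.create(
#     model="claude-sonnet-4-20250514", max_tokens=1024,
#     messages=[{"role": "user", "content": prompt}],
# )
```

The point of structuring the prompt this way is that each focus point becomes a thread you can push back on in follow-up turns, mimicking a book-club exchange rather than a one-shot review.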
Today, about 70% of top studio films are based on books—up from just 25% in the early 1980s. These adaptations also bring in 53% higher global box office revenue on average than original scripts.
In this environment, where studios like Netflix and Prime Video need more original stories—faster and cheaper—AI-powered literary pipelines become not just helpful, but necessary. An LLM-based agent that can surface high-potential manuscripts quickly could change the game for writers and agents alike.
Watch for a future premium post on how AI could reshape publishing: transforming how stories are discovered, filtered, and brought to market, and why that could be a game changer for both writers and readers.
AI: More Powerful — Yet Strangely Worse
In parallel, I’ve been experimenting with large language models and adjacent tools, with mixed results.
ChatGPT: Contextual responsiveness has dropped, confusion has increased, and compared to earlier versions of GPT‑4, the quality of complex analysis has noticeably declined. Especially frustrating is its habit of auto‑asking unwanted follow‑up questions, even with that setting turned off. OpenAI has also begun applying usage limits that degrade model performance once exceeded, even on paid plans. Compared with Claude, which simply blocks access, degradation is the more user‑friendly option, though switching between models often feels like starting an entirely new thread.5
Gemini: Still weak for non‑business tasks, and it loses context after ~5,000 words while pretending otherwise. Its performance is fast and robust, but its vague, sometimes disconnected answers, occasionally referencing memories of unrelated queries, underscore why LLMs aren’t ready to replace humans, even if they can certainly write code faster.
Claude: Currently the best at maintaining context, but prone to frequent crashes and to vague thread‑length and usage limits, even on paid plans, likely a way to gate users around performance issues. Despite these flaws, it handles extended context better than most competitors, though it often hangs during research searches.
Perplexity: Its interface actively interferes with productivity. It appears designed to increase user trust, but at the cost of usability, leaving it a distant fourth unless the interface improves.
What’s ironic: as these models “improve,” they’re also overcomplicating their outputs, diluting their usefulness for sustained work.
Performance declines likely reflect a mix of safety interventions, growing training data that adds noise, and resource constraints that trade depth for speed: reminders that ‘improvements’ often come with hidden trade‑offs.
This also points to the tremendous increase in global usage of these models, and to how mainstream generative AI has become compared to just a year ago. Consider that ChatGPT now processes 2.5 billion prompts per day, with up to 1 billion weekly users globally.
Unfortunately, unlike quitting Facebook to stay niche, there’s no real alternative to the leading LLMs yet.
New players like Proton’s Lumo have entered the scene. Promoted as a privacy‑first, confidential chat LLM, its results remain rudimentary compared to the complexity handled by the big three. This highlights the trade‑off between strict privacy, limited training data, and the steep learning curve private models face. If privacy is a priority, it’s worth watching Lumo’s evolution; it can only improve over time. (Note: OpenAI is the only one of the major providers to offer an incognito mode.)
LLM Upgrades: Smarter Agents, More Human Audio
I’ll cover the how‑to’s in future premium posts, but two developments stand out for generative AI:
OpenAI’s Agent Mode is now available, allowing users to create self‑directed agents that can daisy‑chain tasks autonomously.
Gemini’s Audio Feature turns any text into a five‑minute dual‑voice podcast that sounds like an intelligent conversation about your writing. Think of it as a low‑cost, premium audio Blinkist fully customized to your interests: a boon for writers, consultants, and businesses.
Audio Leaping Ahead
Companies like ElevenLabs are quietly innovating. Their voice‑generation tools now let writers easily turn articles into audiobooks using multiple realistic synthetic voices by “daisy‑chaining” an LLM to generate the proper tags ElevenLabs can read.
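As an illustration of the daisy‑chaining idea, here’s a small Python sketch of the glue step. The `[voice:NAME]` tag scheme is my own invention for this example, not ElevenLabs’ official markup; in a real pipeline, an LLM would first rewrite the article into tagged text, and each segment would then go to a TTS call with the matching synthetic voice.

```python
import re

# Illustrative "daisy-chain" glue: an LLM rewrites an article into
# speaker-tagged text, then each segment is routed to a TTS voice.
# The [voice:NAME] tag format here is hypothetical.

def split_by_voice(tagged_text: str) -> list[tuple[str, str]]:
    """Split speaker-tagged text into (voice, segment) pairs for TTS."""
    segments = []
    # Capture each [voice:NAME] tag and the text up to the next tag.
    for match in re.finditer(r"\[voice:(\w+)\]([^\[]+)", tagged_text):
        voice, text = match.group(1), match.group(2).strip()
        if text:
            segments.append((voice, text))
    return segments

article = (
    "[voice:host] Welcome back. Today we ask whether publishing "
    "is ripe for AI disruption. [voice:guest] The slush pile is the "
    "obvious place to start."
)
for voice, text in split_by_voice(article):
    # In a real pipeline, each pair would be sent to a TTS endpoint
    # configured with that speaker's synthetic voice.
    print(voice, "->", text)
```

Keeping the voice-routing step as plain structured text like this is what makes the chain swappable: any LLM can produce the tags, and any multi-voice TTS service can consume the segments.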
Watch for a future post on how to create a custom generative‑AI audiobook by daisy‑chaining LLMs for your writing.
The Bottom Line
If you haven’t adopted generative AI yet, you may soon run out of time to evade it. The industry isn’t spending billions to keep traditional software afloat. In the next three years, every digital product you use today will either be transformed or phased out.
In five years, what remains untouched will be the technological equivalent of landline phones—relics, like the aging mainframes still propping up much of the financial industry.
Substack: From Writer’s Platform to Social‑First Media?
Lastly, a quick note on Substack.
A Familiar Pattern
This shift toward complexity over core functionality isn't limited to AI tools.
As I've noted in previous posts, Substack continues prioritizing engagement features: pushing video notifications without adequate user controls, promoting celebrity journalists, and transforming Notes into a social feed, all at the expense of the focused writing experience that originally defined the platform.
What was once a writer‑focused community increasingly resembles LinkedIn‑meets‑Twitter—full of growth hacks, success stories, and selfie‑based promotion. It’s the very dynamic many writers came here to escape.
Where does thoughtful long-form writing fit into Substack's growth strategy?
If only there were a platform dedicated solely to serving quality writing… hmm.
Looking Ahead
The common thread across publishing, AI tools, and content platforms is clear: established systems are ripe for disruption when they lose sight of their core value proposition—adding value to the majority of users, not just a select few.
Whether it's the publishing industry resisting innovation, AI models overcomplicating simple tasks, or writing platforms chasing social media‑style engagement, the question remains: are users at the center of the change?
What are you noticing in your corner of the tech landscape? I'd love to hear your thoughts on what should be covered next.
Thanks, as always, for your support.
Jayshree
If this kind of in-depth analysis is valuable to you, please consider upgrading to keep this publication independent. Or buy me a coffee! Thank you.
Upmarket fiction: A publishing category indicating a blend of literary depth and commercial appeal.
QueryTracker data shows the scale: Authors send an average of 44 queries, face an 87.8% rejection or no‑response rate, and only 5.9% get full or partial requests. Top agents may receive up to 1,500 queries a month but take on only about six unpublished clients per year — odds of roughly 1 in 3,000, with response times ranging from 2 to 40+ days. Source. Source. Source.
Slush pile is, apparently, an industry term for the pipeline of unsolicited reading that agents encounter!
Search Traffic Impact: When linking to sources from ChatGPT citations, expect a utm_source=chatgpt.com suffix. This means traffic that may have originally come via Google or other search engines will now be attributed to ChatGPT in analytics. For SEO‑driven sites, this complicates understanding true search traffic sources — obscuring how much traffic is organic versus ChatGPT referrals — and signals OpenAI’s move toward positioning itself as a traffic‑referring platform, likely laying the groundwork for future revenue models.
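The attribution shift above can be sketched in a few lines of Python; the bucket names are illustrative, not a standard taxonomy:

```python
from urllib.parse import urlparse, parse_qs

# Sketch of the attribution problem: the same landing URL can carry a
# utm_source that reassigns credit away from organic search.
# Category names below are illustrative.

def classify_referral(url: str) -> str:
    """Bucket a landing URL by its utm_source query parameter."""
    params = parse_qs(urlparse(url).query)
    source = params.get("utm_source", [""])[0].lower()
    if source == "chatgpt.com":
        return "chatgpt-referral"
    if source in ("google", "bing", "duckduckgo"):
        return "search"
    return "organic-or-unknown"

print(classify_referral("https://example.com/post?utm_source=chatgpt.com"))
```

Note that a ChatGPT-originated visit that began as a Google search would land in the first bucket, which is exactly the measurement gap described above: analytics can no longer tell how much of that traffic was really organic search.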