Writers today seem increasingly convinced that large language models represent the future of writing.
As each new model release promises improved capabilities, the chorus of excitement grows louder.
But beneath this enthusiasm lies a fundamental misunderstanding of what writing actually is, and of why, despite surface-level improvements, these models always seem to fall short.
Two Thought Experiments
For your consideration, two intuition pumps:
Scenario #1
You're walking along the shore when the receding waves leave an impression in the sand that reads:
"To be or not to be, that is the question."
The next wave washes over, reshapes the sand, and reveals the next line of the soliloquy:
"Whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune..."
What's the appropriate response here?
- Do you marvel at the ocean's craftsmanship?
- Do you note improvements in the tide's writing style? (Look, it's not saying 'delve' anymore!)
- Or do you recognize it as a remarkable coincidence—marks that merely resemble writing, but which lack something vital?
What would you have to think about writing in order to think that the marks in the sand meant something?
Scenario #2
Walking down the street, you hear your name called clearly. You turn toward the sound, but find only empty space. It was just the breeze, producing a sound that happened to match your name perfectly.
What's the appropriate response here?
- Do you respond to the wind, to find out what it wants?
- Do you compliment its diction?
- Do you attribute dawning intelligence to the breeze, based on this acoustic coincidence?
- Or do you understand that, despite the perfect correspondence between sound and meaning, no actual communication occurred?
What would you have to think about speech in order to think that the sound you heard meant something?
The Missing Element
The difference between these natural accidents and genuine language comes down to the only thing that truly matters in writing: intention.
Without intention:
- Poetry becomes marks
- Speech becomes sound
- Writing becomes patterns of words
This is precisely what we're witnessing with large language models.
By design, they don't understand. They simply repeat words in statistically probable collocations; hence the common label "stochastic parrots." Like parrots, LLMs only mimic language. They never produce it, because they lack its essential ingredient, the one that turns sounds into speech and marks into words: intention.
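To see how little is going on under the hood, consider a deliberately crude sketch. What follows is a toy bigram sampler, not a real LLM; the corpus, names, and scale are invented purely for illustration. But the generating principle is the same: each word is drawn from a distribution over statistically probable successors, and intention appears nowhere in the process.

```python
import random

# A toy "language model": for each word in a tiny corpus, record which words
# follow it and how often. Real LLMs are neural networks trained on vast
# corpora, but generation works on the same principle: pick a statistically
# probable next word, over and over.
corpus = "to be or not to be that is the question".split()

follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit words by repeatedly sampling a probable successor. No intent here."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break  # no observed successor; the "model" falls silent
        # Duplicates in the list make frequent successors proportionally
        # more likely: a dice roll over collocation statistics.
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("to"))  # e.g. "to be that is the question"
```

Scale the idea up by billions of parameters and the collocations become far more convincing. The mechanism, though, is still a draw from a probability distribution.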
The Uncanny Valley of AI Writing
Writers who substitute these pattern-matching systems for their own intuition and judgment never escape the uncanny valley.
Their outputs may look like writing, and may even sound impressive, but they always ring hollow. All vase, no flowers.
If you consider LLM outputs to be genuine writing, something meaningful that you wouldn't be embarrassed to share with readers, then you're making the same fundamental error as someone who:
- Engages the wind in conversation
- Submits random marks in sand to literary magazines
- Brainstorms with or seeks editorial feedback from a parrot
You're imagining intention where none exists.
A Practical Perspective
Now, if you've found these tools helpful in your writing process, I won't try to convince you otherwise.
But I'd like to suggest that you don't need them.
You might just as well look to cloud formations for feedback on your latest draft, or consult a bowl of alphabet soup for suggestions.
Random as they are, these "tools" might still spark ideas for your next piece, in which case you're welcome to join the chorus of AI advocates.
But if you'd find it absurd for writers to consider cloud-watching or soup-stirring as legitimate writing techniques, then you understand perfectly my skepticism about AI writing tools.