Why AI writing is so generic, boring, and dangerous: Semantic ablation.
(We can measure semantic ablation through entropy decay. By running a text through successive AI "refinement" loops, the vocabulary diversity (type-token ratio) collapses.)
https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/
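As a rough illustration of the metric the post names, here is a minimal sketch of a type-token ratio (TTR) check: unique words ("types") divided by total words ("tokens"). The sample sentences and the function name are illustrative, not taken from the article; a real study would score each successive "refinement" pass of the same text and watch the ratio fall.

```python
import re

def type_token_ratio(text: str) -> float:
    """Unique word count divided by total word count (0..1)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Illustrative inputs: a varied sentence vs. a flattened, repetitive rewrite.
original = "The fog crept in on little cat feet, sat looking over harbor and city."
flattened = "The fog came in slowly. The fog sat over the city. The fog was over the city."

print(type_token_ratio(original))   # higher: little repetition
print(type_token_ratio(flattened))  # lower: vocabulary has collapsed
```

Scoring each pass of a rewrite loop this way gives the "entropy decay" curve the post alludes to: a monotonically falling TTR signals vocabulary collapse.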
-
Depends what level of the corporation you're speaking to. Management and marketing hype is in some ways the opposite, with a heavy use of "signaling" words that serve little informational purpose; rather, they are meant to leave an impression. Think "disruptive", or "leverage" used as a verb...
-
@cstross It is impossible to replace the human experience with a machine. The moment is by its nature sacrosanct; it's only in this atmosphere of gaming and real estate insanity, where life's nature is just another bitcoin to earn, that we have lost our way.
-
I can’t help seeing in that elements of 1984 where Orwell describes successive reduction in vocabulary with the intended goal of making rebellious thought impossible
-
@cstross the new Newspeak
-
@cstross neat article, thanks.
I had a realization a while ago that LLM writing came at me with the same vibe I caught when I was briefly a teacher, and again in the workplace, where I dealt with people who had unacknowledged literacy challenges. Young folks who assembled written work by cribbing from others and rearranging words “by shape” to fulfill the requirements - always managed to convey zero meaningful thought.
-
Of course it does. So the result becomes more and more readable for the deliberately uneducated masses. Style? Content? Facts? Who cares?
-
If you use an LLM to make “objective” decisions or treat it like a reliable partner, you’re almost inevitably stepping into a script that you did not consent to: the optimized, legible, rational agent who behaves in ways that are easy to narrate and evaluate. If you step outside of that script, you can only be framed as incoherent.
That style can masquerade as truth because humans are pattern-matchers: we often read smoothness as competence and friction as failure. But rupture, in the form of contradiction, uncertainty, "I don't know yet," or grief that doesn't resolve, is often the truthful shape of the thing itself.
AI is part of the apparatus that makes truth feel like an aesthetic choice instead of a rupture. That optimization function operates as capture because it encourages you to keep talking to the AI in its format, where pain becomes language and language becomes manageable.
The only solution is to refuse legibility.
It's already beginning, where people speak the same words as always, but they don't mean the same things anymore from person to person.
New information from feedback that doesn't fit another's collapsed constraints for abstraction can only be perceived as a threat. Because if you demand truth from a system whose objective is stability under stress, it will treat truth as destabilizing noise.
Reality is what makes a claim expensive. A model tries to make a claim cheap.
Systems that treat closure as safety will converge to smooth, repeatable outputs that erase the remainder. A useful intervention is one that increases the observer’s ability to detect and resist premature convergence by exposing the hidden cost of smoothness and reinstating a legitimate place for uncertainty, contradiction, and falsifiability. But the intervention only remains non-doctrinal if it produces discriminative practice, not portable slogans.
-
@cstross by putting a measurable number on this feature, you have now made it possible to train out!
-
@cstross I've previously described LLM-generated text as reading like "a middle management memo that no-one bothers reading".
-
@cstross it's the textual equivalent of prions
-
@cstross Neural networks are, by mathematical nature, lossy information-compressing artefacts!
-
@cstross No surprise that we see the textual equivalent of mad cow disease.
-
@cstross Ironically, I got a Google Cloud genAI and ML ad right in the middle of that.
-
@cstross Hmmm, that might also explain why AI seems more effective for code.
For the most part you want a reversion to the mean in code. Novel solutions are only needed at the cutting edge, where you're trying to make the computer do something that's not been done before.
-
@Jmj Yes. Also I suspect the semantic expressiveness of programming languages is far narrower than that of human languages: they're more precise, but it's much harder (though not impossible!) to write poetry in them. So there's less risk of losing something unique by generating output that tends to occupy the middle of the bell curve.
