I miss the text-only reading era. This is a blog and should not need JavaScript enabled to render text to a page. I would rather not be annoyed by flavor-of-the-month duplicate scroll bars, cookie banners, newsletter pop-ups 5 seconds in, scroll-to-the-top pop-ups, idle overlays, highlight helper bars that break copy-paste, etc. This blog didn't have all of those, but it had some. I'm sure the metrics look great, because I had to load this page four times: once initially; a second time after disabling JavaScript and realizing it doesn't load anything at all; a third time after re-enabling JavaScript and deleting all the annoying elements; and a fourth time to make sure my cosmetic filter was applied correctly. 4x the interactions! Must be doing something right.
You'll love GreaterWrong, then: https://www.greaterwrong.com/posts/BJ4pnropWdnzzgeJc/i-am-de...
I appreciate the sentiment, but this remark is far too common here. It does not address the content of the post and could apply to any submission from LessWrong. The author of the post has made none of those technical choices.
Funny, I assumed it was a subtle critique of how change always elicits haters. Though now I realize it was an honest non sequitur to complain about his pet issue?
Honestly not sure anymore.
> I had to load this page four times. First initially, and then disabling JavaScript
Had to?
They gave enough detail that it's clear from context what 'had to' meant.
I’ve completely avoided using AI for writing (although it looks like my coding avoidance is coming to an end). As someone who kind of views using a thesaurus as “cheating”¹, using AI to do the writing is way beyond the pale. A lot of what writing is about for me is discovering and distilling and figuring out what I think. Take that away and I might as well just spend the day watching television and playing video games and getting dumber by the minute.
I would go a step further, in fact, and when I’m writing something creative, I may choose to avoid whatever the autocomplete is suggesting as the next word (although I have it disabled in most contexts). People have a tendency to fall into grooves in their writing/speaking and this kind of acts as a reminder to not do that,³ although I’m far from immune myself (looking at my comment history, it’s upsetting to see the same verbal tics repeated when I have something to say).
⸻
1. If you don’t know a word well enough for it to come to mind when you’re looking for a word for something, you may not know it well enough to use it in your writing.²
2. Cue the people who will disagree. Suffice it to say that I occasionally will use a thesaurus to pull up a word that’s just out of reach, especially as my brain gets older and weaker, but even that I try to avoid.
3. When I got my MFA, there was a visiting writer who had published a creative writing book which was largely based on his former students’ transcriptions of his lectures. During the lecture he gave, even though he was speaking extemporaneously, he would speak word-for-word whole paragraphs from the book.
>I just wrote what my brain is instructing to type (might not reread it before posting)
Why would I put effort into reading something that had no effort put in by the author?
This guy needs an editor, AI or otherwise.
There are some people who believe that writing is an act of creative expression. In other words, that writing is primarily about the act (and as such, it's quite a selfish activity). Editing destroys the expressive act and must be avoided.
These people's writing is usually incoherent and they are very proud of it. If you've ever read a bad new-age self-help book you've probably encountered writing like this.
Good writers understand that writing is about communication. The initial act of writing (i.e., word puke) is worthless. What matters most is a piece of writing's ability to communicate clearly.
This writing is usually pleasant, concise, and clear.
I'm sure you consider your opinion to be correct, but there is something to writing being an act of creative expression. It's fine for it to be a selfish activity. Diaries are this way, for example, and the negativity you point at other people's hobbies is unfortunate.
There's something to the idea that if the writer is writing with the intention of publishing, the piece should be edited. But if you're writing for yourself, and happen to simply keep your writings somewhere public, some other person's desire for you to edit more is a measure of that person's sense of entitlement.
I have about as much desire to read some publisher's edited version of Anne Frank's diary as you appear to have to read the original.
"I think that is the beauty of writing, the raw , unedited emotions of the person behind every words either for entertainment or educational purposes, is what makes it special"
- the article, clearly expressing the intent of its own mistakes and contextualizing them in the era of LLM-borne "perfect" text
"I think that is the beauty of writing, the raw , unedited emotions of the person behind every words either for entertainment or educational purposes, is what makes it special"
This is not the beauty of writing. Everyone's writing needs editing. The "raw unedited emotions" are not something anyone wants to read, and this article is no exception.
The author tells us that English is their fourth language, which is certainly impressive. However, their writing is messy and poorly constructed. It's difficult to read, and not at all enjoyable. The choice is not between doggerel like this and LLM empty perfection.
I appreciate the sentiment, and good for him. However, from an audience perspective, why choose to watch a guy filming himself eating cereal with a shaky phone camera when you could watch The Sopranos? (or the latest MrBeast extravaganza, to avoid being pedantic).
I guess it's OK if you enjoy reading someone expressing himself without communicating anything valuable and well produced. It's kind of like people who enjoy stream-of-consciousness poetry or unhinged personal blog posts. It's fine.
But most of us (I think) read for our own gain, expecting substantial / stimulating text that is ideally well researched and serves a clear purpose.
Something like that needs an editor, effective proofreading, and quite some time of work and rework.
At this point, it is far more distracting to see LLM-isms and get completely thrown out of the reading-understanding process than to see some typos or grammatical errors. I actually feel reassured when I see something like a "they're/their" swap, because I know I am reading the author's thoughts instead of some linear algebra vaguely influenced by the author's thoughts.
Five years ago, I probably would have been annoyed by the same.
While I can get behind the sentiment, I hope bad writing doesn't become the standard anti-AI signal. A simple grammar check would have greatly improved this post.
AI has plenty of training data on poor writing. If people start looking for bad grammar and typos to identify human articles, generative AI is certainly capable of spitting out prose that looks poorly edited.
I kind of hope the anti-AI-writing stuff passes and we can focus on what makes writing good or bad again instead of “this is clearly AI” posted in response to every blog. I actually don’t care if it’s AI but I do care if it’s worth reading and pleasant to read.
The relative value of those things is shifting. As the cost of polished LLM drivel falls to zero, some might prefer even the most unedited, off-the-cuff human writing to the slop.
What if the reality is that both are worthless? LLM slop is of no value, but human slop doesn’t gain value because fingers typed it.
I mean, there's lots of room at the bottom. But part of the reason LLM slop seems so objectionable to me is its sameness; it's obviously drawn from the same thin manifold of the language. A human articulating their own thoughts, however those may be rendered on the page, at least realizes their own idiosyncratic region of the language. Writing one's own thoughts in one's own words declares the existence of one's own language, consonant with but distinct from all the others. Asserting one's individual voice and style, even if the content is worthless and the aesthetics objectionable, maintains diversity in the face of the LLM monoculture. We lament the lost apples, even the bitter ones; we don't ask the birds to each justify their differences.
Indeed. I for one enjoyed this piece. Yes, it had errors and lots of odd grammatical choices, but the reading remained affordably challenging and the prose had a newness to it.
> And for people who successfully taken back their creative writing skills, how did you do it?
“AI is one possible reference for my actual writing.” Generate info and perspectives, but only ever write stuff yourself. Something about this forces me to stay in my own “writing voice,” at least personally, in the various places I use AI tech. I think of the tech like a chess engine: engines are better than any human player, but I use them to gain perspective rather than to cheat. Otherwise, why bother playing chess?
I certainly miss the pre-AI reading era.
So much content is just straight copy-pasted from the LLM now. Articles, blog posts, LinkedIn posts, Reddit comments, etc. Even just using the LLM for 'editing' tends to shift the voice to an obvious LLM voice when used naively. It is getting worse, too. Last week a co-worker sent me a screenshot of Claude for me to review their "work", which was just whatever Claude made up.
Usually, if something is very obviously unfiltered LLM output, I just stop reading.
I do use LLMs for writing myself. They are useful, but are poor authors.
AI for editing is garbage. Chat to it to get ideas maybe, but in its current incarnation it’s just going to degrade anything you filter through it.
AI for editing is good and has many useful cases. Where it fails is that the tone/style of the writing gets overtaken and reads like all other AI-edited writing. The quality of the edit is good; it's just not in your style. When everyone sounds the same, there is no uniqueness. Using it to edit legal letters, software documentation, etc. are very good use cases; using it to explain your ideas in a blog, not so much.
I work mostly on the tech side of things but my corporate limitation has always been writing up documentation, communicating/translating to stakeholders, and recalling everything relevant when writing PR descriptions. AI has been a breath of fresh air. I actually communicate more information efficiently than I would have ever put the effort into before. I still maintain my own writing for more casual things like social media (HN included) and low stakes Slack conversations but AI for getting across ideas and then proofreading it is great.
I was asked to write user stories about a complex topic where I’m the SME at work. I spent two hours info-dumping everything I knew about the project, everything the AI wouldn’t have any context for: using Cursor to add related projects to the workspace, tagging specific files where we’d implemented similar things in our style, and noting all the quirks of the system, how it works, and where to find relevant information. I spent a lot of time on it, then asked it to use the CLI to grab relevant information from our infra and write stories about how we’d accomplish everything I intend to get done. I then spent another few hours reviewing the 45 or so stories that conversation generated. It was similar to how I’d talk to a new contractor I was onboarding onto the project.
I have a deep knowledge of the information, have done the process we’re doing on two previous projects, but organizing all the stories would have been an absolute nightmare. I still spent half a day on this, I’d guess the fatigue from the boring parts would have made this take a week or maybe two, just because I was doing the parts I enjoy (knowing things and describing them) and I was able to offload the parts I’m not great at (using a lot of boilerplate language to organize the info I knew into scrum stories). Then I had a meeting, reviewed the stories with my coworkers, we had a discussion, deleted two or three of them that we determined weren’t necessary, and fixed up one or two where I’d provided insufficient information about some context surrounding coloring of a page.
It burned through a ton of Opus 4.6 tokens, looked through a ton of code (mostly that I’d written, pre-LLM), but has been amazing for helping me move into a lead position where grooming stories and being organized has always been my weakest point.
Also, when I wrote a postmortem for a deploy that had some issues, I wrote it all by hand. You have to know when the tools help and when they will hinder.
I think it's quite good. Of course, I'm not taking 100% of the output, but it takes care of my grammar blindspots (damn you commas and a/an/the articles!).
Can you please share what and how gets degraded? Sometimes I don't like a phrase it selects, but it's not common
Well, for one example, it inhibits your desire to improve against those very blind spots. In exchange for that your audience gets 3-4x length normalized bullshit to read instead.
AI can take a rough draft, clean it up, and shorten it as much as you want. The suggestions very often expose ambiguities in the original text. If you think the LLM got it wrong, it's nearly always the LLM overreading some feature of the original that you failed to catch, which is precisely what you'd want out of a proofreader.
Yes, LLMs reduce the individual charm of prose, but the critique itself carries a romantic notion that we all loved the idiosyncratic failures of convention and meaning which went into highly identifiable personal styles, and which often go missing from LLM-edited work.
> Well, for one example, it inhibits your desire to improve against those very blind spots.
I'd contend this is not true. Even professional authors go to an editor who identifies things that need to be fixed. As the author of the text and knowing what it should be, it can be difficult to read what you wrote to find those mistakes.
> In exchange for that your audience gets 3-4x length normalized bullshit to read instead.
This is not at all what is implied by having an AI act as an editor. The point is identifying misplaced commas, incorrect subject-verb agreement (e.g., counts), and incomplete ideas left in as sentence fragments.
You appear to be implying that the author is giving the AI agency to create the content, rather than using it as a tool that acts as a super-charged Grammarly.
> Even professional authors go to an editor who identifies things that need to be fixed.
Yes, and these people are good at it. What’s your point?
If you need grammar checking, there are thousands of apps including word processors, web browsers and even most mobile devices that will check your inputs for grammar and spelling mistakes as you type. All of that without burning down the rainforests or neutering your thesis.
> it takes care of my grammar blindspots (damn you commas and a/an/the articles!)
There are plenty of pre-LLM tools that can fix grammar issues.
> Can you please share what and how gets degraded?
I'm not the person you asked, but IMO LLMs suck the style and voice out of the written word. It is the verbal equivalent of photos that show you an average of what people look like, see for example:
https://www.artfido.com/this-is-what-the-average-person-look...
As definitionally average the results are not bad but they are also entirely unremarkable, bland, milquetoast. Whether or not this result is a degradation will vary, of course, as some people write a lot worse than bland.
In many kinds of writing, perhaps most, communicating your state of mind to the reader is a primary goal. Even a smart LLM fundamentally degrades this, because to whatever degree that it has a mind it isn't shaped like yours or mine. I've had a number of experiences this year where I get to the end of a grammatical, well-structured technical document, only to find that it was completely useless because it recited a bunch of facts and analyses but failed to convey what the author was thinking as they wrote it.
(Of course, that may well be exactly what you're looking for if you're writing an audit report or something.)
>damn you commas and a/an/the articles
This sounds like an ESL issue. LLMs are good at proofreading ESL-written English text. They are not as good at proofreading experienced English writers.
It’s kinda useful to me for the following three reasons:
- spelling
- grammar, or weird grammar, as English is not my native language
- proofreading and finding things that do not make sense in terms of sentence structure
I do not use it for ideas, discussing the writing, or anything else, because that defeats the purpose of writing it myself (creative writing).
Only if you don't understand how to control AI. If you understand how it works and have the skills to ride it like a wild horse, you can make yourself a 10x developer. It's maybe a bit of an insult, but you seriously have to change that mindset. AI is not going to be worse tomorrow. It will get better, and it will dramatically change our lives as developers. Code will no longer be a prominent thing we work on in the near future.
I actually find Gmail a better editor/grammar check than LLMs. It makes isolated simplifications/corrections that IMO have minimal style impact and just focus on clarifying phrasing.
Yeah, Gmail got crazy good in the last 3 months. Pretty sure it's LLM-driven too, but it went from 90s MS Word to better than Grammarly recently, IMO.
What does it say about me that when I run my writing through one of those "detect if AI" tools I seldom see a value of less than 70% confidence that the writing was AI generated?
I know this is a spicy take, but it probably just means you're more eloquent in your writing than most netizens...
And that's not really a hard bar to clear if you look at how people write comments online (including places like GitHub).
Anyone that uses punctuation, and capitalises words, probably automatically gets past the 70% confidence line.
It baffles me when I see ostensibly smart people refusing to press shift. Especially programmers. I know you can do it! I've seen you use curly brackets!
AI detectors don't work.
What it says (and this fact is not popular around here) is that you write better than the average person.
have you tried pangram? it's basically the only good AI detector, and they have nearly 0 false positives
> and they have nearly 0 false positives
I really don't see how this can be possible unless they're accepting abysmal recall. Perhaps I'm missing something fundamental here, but the idea that AI and non-AI-assisted text can be separated with "nearly 0 false positives" just says to me that it's really a filter for only the weakest, most obvious AI-generated text. Is that valuable?
Pangram's explicit pitch is extremely low false positives, accepting that a higher rate of false negatives is acceptable.
Simple: the derived variance in your word usage and sequences falls outside the mean distribution range, so it gets labeled as AI-generated, given this specific evaluation algorithm.
It’s not nondeterministic
you can probably do the shannon entropy calculation yourself if you understand what the evaluation algorithm is
That said, if the evaluator is non-deterministic, then there's no value in the estimate anyway.
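For what it's worth, the "Shannon entropy calculation" mentioned above can be sketched in a few lines. This is a toy illustration only, using a character-level distribution as an assumption; real detectors score token probabilities under a trained language model, not raw character frequencies.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character of the empirical character distribution."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    # H = sum p * log2(1/p) over observed symbols
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# Repetitive text carries no information per character;
# varied text scores higher.
print(shannon_entropy("aaaaaaaa"))            # prints 0.0
print(shannon_entropy("the quick brown fox"))
```

Whether any threshold on a statistic like this can separate human from LLM text with "nearly 0 false positives" is precisely what is in dispute here.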
It probably means that your writing stylistically is close to the vector-space average of "good" writing, which is what AI produces.
FWIW, your comment history here does not look like AI at all to me, and I think I have a very (maybe too?) high sensitivity to AI slop.
I haven't tried my HN comments; I've only tried things spanning more than a few sentences and that I've put more effort into. I only discovered this when my son put an e-mail I wrote to his teacher that he was CC'd on into the tool on his school iPad.
try it with something published before 2022. do you still get the same results?
I really doubt those tools are good for anything
about you? not much. but i wouldnt spin up a blog, or even longer comments here, if you want to keep your sanity.
the amount of "that is obvious ai slop" comments i see on mine or other people's genuine non-ai writing has discouraged me from sharing anything more than roughly a paragraph for probably the rest of my life.
I want to emphasize a thought you expressed:
> "..but maybe it's a good thing that most of us don't allow this technology to reframe our thoughts."
No, you're not the only one experiencing this: I too had the same concerns as you: with every new thought, every new creation, I had to ask the AI's opinion, as if I were no longer able to judge, to decide, without consulting the AI (...just to be safe, you never know...).
The only way to regain your creative ability is to write down your thoughts yourself, read, reread, rewrite, correct, express your opinion...
What AI can't do is convey emotions.
>as if I were no longer able to judge, to decide, without consulting the AI
"the Whispering Earring" – https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
A friend described it as "there's no blank page any more".
depending on how hard the "the brain is a muscle" saying applies, there is no way using LLMs/chatbot systems/AI is not going to deteriorate your brain immensely.
when i was younger, we didnt have cellphones. i had ~20-30 phone numbers memorized, at least. i also used to remember my credit card number. my brain has not deteriorated now that i have offloaded that to my phone.
point being: it depends on how you use it. if you offload critical thinking to ai, you will probably (slowly) atrophy your critical thinking muscles. if you offload some bullshit boilerplate or repetitive tasks or whatever, giving you more time overall to do the critical thinking part, you will be fine.
In I, Robot, Will Smith prefers to drive himself because he doesn't trust AI. But we are moving towards self-driving because it would be safer. Would you trust a calculation more if it was done by hand using log tables? Having vehicles allowed us to create sports like dirt bike riding and monster truck racing. Yes, something is lost, but something is also gained. We move up the layers of abstraction.
If your body is in good shape, stopping exercise won't make you deteriorate that quickly. What I wonder is, will people get in good shape in the first place.
What I mean is, as someone with lots of experience, I don't worry about not learning the basics anymore as much as someone in their 20s or 30s maybe should.
I think this is backwards.
Not sure what you mean by quickly. Back when I was in racing shape, if I stopped my training plan for as little as two weeks, (probably less actually, but I'm being conservative here) I would have a measurable drop in fitness.
Now, as someone who regularly walks the dog and bikes to work, I've got "less to lose" and probably wouldn't deteriorate as much.
Aerobic fitness is hard to shake, but neuromuscular changes can be lost very quickly
And here I was thinking how clever an example I was giving :)
See the recent article suggesting use of navigation apps may correlate in populations to increased Alzheimer’s. Will it happen to you? Maybe, maybe not. Life’s a box of chocolates!
Not joking: buy and read books. Old books are written only by people (with the help of an editor).
Fun fact: Editors are usually also people. Except for that one dog I met during a cold winter's day in 1987 in a run-down London pub.
On the internet, no one knows you're an editor
No way, bro! I'm no longer an editor, though.
Or read magazines and newspapers from reputable publications. My grammar and writing have improved tremendously from reading quality magazine articles, e.g. stuff from The Atlantic or The NY Book Review or whatever.
Both magazines and books are valid forms of information consumption and books are not the only way to improve your writing, reading, and understanding of the world.
I wouldn't count on current stuff in those publications being free from AI. We're seeing it in peer-reviewed paper submissions so why not in literary forums?
If you limit yourself to stuff from maybe five years ago or older, yeah it's going to be human-written and human-edited (ghostwriting still possible).
Every now and then when I'm reading something, the writer will use a turn of phrase, a specific word, a metaphor, etc, that is unusually clever, or allows me to see the concept in some obtuse light. Or even, they are just able to choose the right words to make something sound musical or rhythmic in some pleasant way. It's intellectually delightful to come across these in writing.
I've never been surprised by AI writing. Emotion is the biggest part of communication, and these grey boxes have none.
I feel like asking it to polish or rewrite is going too far. Using it as a grammar/spell checker or thesaurus is fine, though. At least that preserves one's voice.
And I've definitely used it when I can't remember that one stinking word that I know exists and is perfect for this occasion.
> one stinking word
"hey robot give me every word even mildly related to $SOME_SENSE_ON_THE_TIP_OF_MY_TOUNGE" is a wildly satisfying and underrated experience.
I am definitely missing the pre-AI w̶r̶i̶t̶i̶n̶g̶ era
I am not a native speaker. For anything like HN comments I don't use AI, but I see no harm in using AI to correct grammar and maybe some wording; the ultimate change shouldn't be a copy-paste replacement, though. It should be well thought through by the author.
I think AI will accelerate an already existing trend that predates it: the global regression to the mean we're seeing in every creative field, from design to videogames, from cars to fashion.
Agree. This also ties into the hypothesis that we're hitting a local maximum in terms of the state of the art in creativity as we offload that work.
I find this similar to when photography was invented and painters moved away from realism in search of originality and creativity, producing modern art, which to many of us just looks silly.
This is exactly the same struggle for me. Writing technical content about PostgreSQL while keeping my own voice, without sounding LLM-written, is genuinely difficult.
As English is not my first language, I run into the problem that the line between "fix my clumsy sentence" and "rewrite my thought" is very thin. Same with the line between "boring" technical explanation and more approachable content. I'm getting pushback on both.
I’ll take a clumsy sentence written by a non-native speaker any day over LLM generated mush. At least I know you chose those words specifically so it gives me some insight into your state of mind and intended meaning.
Any native English speaker who doesn’t live under a rock is very accustomed to reading and hearing English from non-native speakers and familiar with the common quirks and mistakes. English is quite forgiving as a language, we understand you. When in doubt, simplify it.
> English is quite forgiving as a language
it's a couple mutually-conflicting languages in a trenchcoat; forgiveness and flexibility are perhaps its defining properties.
To the broader issue: "polish" (in any language) is only valuable insofar as it makes the ideas clearer, attests to innate qualities of the author and/or the investment of their time, or carries its own aesthetic value. As LLMs make (a certain kind of polish) cheap to produce, the value of the middle category attenuates to nothing.
In some specific work contexts, such as writing pull request descriptions, not sounding like AI is something I've given up on trying to optimize. It's simply not worth the effort for me being non-native and writing detailed PR descriptions being so arduous, and the agent already has full context anyway. Obviously any fluff or inaccuracies are aggressively weeded out but I don't care anymore about the AI voice.
> any fluff or inaccuracies are aggressively weeded out
this work is paramount. Without clear evidence of human filtering, a long, well formatted message/PR/doc is likely to reduce my estimate of the value/veracity/relevance of its content.
This. My personal style has always been LLM-like, including the generous use of em-dashes and "not only this, but that" mannerisms. It's increasingly difficult to retain a reputation.
Don't want to sound like an llm? Don't read llm content. Remove yourself from places where you might be liable to read it.
If you strictly read printed books only and are never exposed to online content, you'd think the em-dash is a signal of human writing.
No, you wouldn't think that. The thought of something not being human-written didn't even occur to anyone before decent LLMs came around.
It's not that simple. LLMs were trained on lots of writing, and the "LLM voice" resembles in many ways good English prose, or at least effective public communications voice.
For years, even before LLMs, there have been trends of varied popularity to, for lack of a better word, regress - intentionally omitting capitalization, punctuation, or other important details which convey meaning. I rejected those, and likewise I reject the call to omit the emdash or otherwise alter my own manner of speaking - a manner cultivated through 30+ years of reading and writing English text.
If content is intellectually lacking, call that out, but I am absolutely sick of people calling out writing because they "think it's LLM-written". I'm sick of review tools giving false positives and calling students' work "AI written" because they used eloquent words instead of Up Goer Five[0] vocabulary.
I am just as afraid of a society where we all dumb ourselves down to not appear as machines as I am of one where machine-generated spam overtakes all human messaging.
Well, that isn't what I'm suggesting. I'm suggesting people ditch X and Reddit, and probably also HN in the next couple of months. If a headless agent can post somewhere, just don't bother visiting that site; honestly, that's a great rule of thumb right there.
That should leave you with media sources like nyt and your local library, which seems healthier to me. And maybe it might encourage a new type of forum to emerge where there is some decentralized vetting that you are a human, like verifying by inputting the random hash posted outside the local maker space.
On HN or Reddit you can occasionally read genuine opinions from real people. In a newspaper, 100% of the text is trying to manipulate you.
> like nyt
I hope editorial departments everywhere are taking careful notes on the ars technica fiasco. Agree there's room for some kind of quick "verified human" checkmark. It would at least give readers the ability to quickly filter, and eliminate all the spurious "this sounds like vibeslop" accusations.
The bad part is that people may start writing a bit worse on purpose, just so they don't get read as AI.
> "LLM voice" resembles in many ways good English prose, or at least effective public communications voice.
It does not resemble that. It is usually grammatically correct, but it is also pretty ineffective: bad writing with good grammar.
One of the most common criticisms is the use of the emdash. This is a classic bit of English prose that is not problematic except as a stereotype used to dismiss writing for form rather than for content.
Let's grab a few books off the shelf (literally).
Douglas Adams' The Hitchhiker's Guide to the Galaxy has four emdashes on the very first page:
> It is also the story of a book, a book called THGTTG - not an Earth book, never...
Isaac Asimov's classic The Last Question: three emdashes on the first page (as printed in The Complete Stories, Volume I)
> ...they knew what lay behind the cold, clicking, flashing face -- miles and miles of face -- of that giant computer.
Mark Z. Danielewski, House of Leaves: Three emdashes on page 1
> Much like its subject, The Navidson Record itself is also uneasily contained -- whether by category or lection.
Robert Caro, Master of the Senate: Five emdashes on page one
> Its drab tan damask walls...were unrelieved by even a single touch of color -- no painting, no mural -- or, seemingly, by any other ornament
Other page 1s:
* Murakami - 1Q84: 1
* Murray/Cox - Apollo: 1
* Meadows - Thinking in Systems: 1
* Dostoyevsky - The Brothers Karamazov (Pevear/Volokhonsky translation): 4
* Caro - The Power Broker: 5
* Hofstadter - Godel, Escher, Bach: 3
Honestly, when I started this post I expected to have to dig deeper than page 1. The emdash is an important part of English-language literature and I reject the claim that we should ignore all writing that contains it.
No one is asking that we reject all prose with emdash. Not all emdash-users are LLMs, but many LLMs are profligate emdash-users, so adjust your priors accordingly.
Secondarily, I think there's a part of the discourse missing: the presence of a syntactic emdash in a sentence on the internet is not itself a strong signal of LLM-writing - but the presence of an actual emdash glyph (—) should raise some eyebrows, esp. in fora that aren't commonly authored in rich text editors (here, twitter, ...)
Before LLMs, the em-dash glyph was a decent tell simply that... the author was using a Mac, because it's a simple and easy-to-remember (or even guess!) key-combo on there. Not that you can't type it on other keyboards, but the Mac one for whatever reason had a combo of users-who-wanted-to-type-it and layout-that-makes-it-easy that resulted in a high proportion of correct em-dash employers being Mac users.
(option-underscore, or option-shift-dash if you prefer to think of it that way)
On iOS, you can type it by simply holding down on the "dash" button then selecting the em-dash from the list of options it presents. It may also correct double-dash to em-dash a lot of the time, not sure.
I have used the correct em-dash everywhere I can for over a decade, which amounts to nearly everywhere.
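For what it's worth, the glyph-versus-stand-in distinction upthread is easy to check mechanically. A minimal illustrative sketch (my own toy counter, not any real detector):

```python
import re

EM_DASH = "\u2014"  # the actual em-dash glyph

def em_dash_stats(text: str) -> dict:
    """Count true em-dash glyphs versus the common ASCII stand-ins."""
    return {
        "glyph": text.count(EM_DASH),
        # "--" not part of a longer run of hyphens
        "double_hyphen": len(re.findall(r"(?<!-)--(?!-)", text)),
        # a lone hyphen surrounded by whitespace, used as a dash
        "spaced_hyphen": len(re.findall(r"\s-\s", text)),
    }
```

As the thread notes, the glyph count alone proves nothing; Mac and iOS users have typed it easily for decades.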
When a drunk chef dumps way too much salt into my ramen, the fact that good ramen also contains (more tastefully applied) salt redeems nothing!
If you outsource your thinking and skills, your ability to do either atrophies. You'll become dependent on outsourcing for both.
You're trading ability and competence for convenience.
For some reason I read that as "If you outsource your thinking and skills, your ability to do apostrophes."
>Although 80 % of the content was my own writing, the fact that it was run in a LLM enginee for grammar and vocabulary cross-check, made it failed the "probable written by AI " metric; and it was rejected.
should be:
>Although 80% of the content was my own writing, the fact that it was run through an LLM engine for grammar and vocabulary cross-checking meant that it failed the "probably written by AI" metric, and it was rejected.
1. 80 % -> 80%
2. in -> through
3. a LLM -> an LLM
4. enginee -> engine
5. cross-check -> cross-checking
6. cross-checking, -> cross-checking (removed the comma)
7. made it failed -> meant that it failed, (or "made it fail" depending on whether you want to preserve the past tense or preserve the word "made")
8. probable -> probably
9. by AI " -> by AI"
10. ; and it was -> , and it was (no need for a semicolon when linking with a conjunction like "and", and I would consider another word or phrase such as ", and, as a result, it was rejected" to emphasize the causal relationship between the clauses)
That's ten corrections fixing straightforward typos and/or grammar and vocabulary mistakes in one sentence. Most are fairly objective, though I can understand different opinions on 2, 7, or maybe 10.
Relying on AI for editing seems to have atrophied the author's writing if that is what he or she thinks is worth publishing on a blog like this. I would suggest practicing editing your own work, and not even thinking about passing it through AI (especially when you were told not to use any AI!), for a while. Given that English is not your first (or even second or third) language, I would also suggest having a native speaker with some demonstrable writing skill review your writing and give feedback on how to make it more idiomatic. For example, writing being "run through an LLM" rather than "run in an LLM" is a relatively subtle difference compared to the others, and it's very, very common for preposition mistakes like this to show up when writing in a language other than your first. I am still hopeless with French prepositions.
Typos and minor grammatical errors within a well-reasoned piece are aesthetic now; they mean you didn't run it through an LLM...
The first sentence makes no sense.
Are grammatical errors and typos fashionable now? Reading this post, it seems the antithesis in the LLM era is not to edit at all, but rather to write down a stream of consciousness to make it "personal".
I've started letting some run-on sentences remain because it feels closer to how humans think and usually write. Letting typos go seems silly, though.
When writing letters of recommendation now, I write in a more human tone to avoid sounding like a bot with a line of explanation at the start. Not an error in the sense you mean, but an error in tone for a letter of recommendation, certainly.
I don't know but capitalisation seems to have gone down the shitter.
Definitely think it is. It will be glorious. We will focus more on content than on mere aesthetics as people try to signal that they are not LLMs.
I feel like having to signal that you're a human detracts from the content side of things. Proper spelling and grammar, good style etc. are there to help you convey your ideas more accurately. Resorting to a stream of consciousness style of unrefined writing makes it apparent that you're a human, but the downside is that your text is bad.
Style is entirely subjective, and not every text is looking for a refined reader.
Oh no, I have had enough of people with quirky (i.e. cringey) writing on the internet. It started with those who refused to use their shift key and it's quickly devolving into something that makes you shiver when you read it. (Not to mention how easy it is to use a system prompt to make an AI write in whatever style you like.)
Flaw become aesthetic all time. People faked butt bandage follow Sun King fashion. Ugly as sin, still aesthetic.
lol
I see loads of LLM articles where the model has been prompted to never capitalise, avoid full stops, pepper in spelling mistakes, etc. It sucks.
Maybe it is.
Just like hand made items are popular for their imperfections.
An awful lot of stuff in the "hand made" aesthetic is made by machine and factory too, and I suspect a similar thing will happen to any popular writing aesthetic that attempts to avoid being automated away.
Personally, I'll just continue to use my own voice. I try to correct spelling and grammar mistakes, and proof-read my writing before posting.
It's not perfect, and my writing can at times be idiosyncratic, but it's my voice and it's all I've got left.
But don't be mistaken in thinking that those mistakes make it better, it just makes it mine.
And because hands can still make things that machines cannot.
eg: https://ids.si.edu/ids/deliveryService?id=SAAM-2011.6_1
from: https://americanart.si.edu/artwork/mandara-79001 https://www.museumofglass.org/ltlg
I mean yes? I am more likely to read and trust something that is not written or cowritten by ai.
I want real humans giving real human opinions, not an AI giving its best guess at the most "rewarding" weighted opinion.
I never use an LLM to paraphrase my own voice as a matter of principle, but I’ve still been repeatedly accused of doing so because I happen to always have written structured posts, used “smart quotes,” and done that negative comparison thing (it’s genuinely not just fluff, it’s a genuinely useful way to— ah god damn it). Sigh.
Right. The LLMs' quirks aren't bad in themselves, they're bad when they're in every damn paragraph. They're mostly things that in moderation actually improve writing, and that if you see them once (without the knowledge that they're things LLMs do) would rightly tend to make you think better of the author. And so, of course, in RLHF training they get rewarded, and unfortunately it's not so easy for an LLM to learn "it's good to do this thing a bit but not too much."
The structured thing you mention is the one that bugs me most. I genuinely think that most human writing would be improved by having more of the "signposts" that LLMs overuse. Headings, context-setting sentences, bullet points where appropriate, etc. I was doing "list of bullet points with boldfaced intro for each one" before the LLMs were. But because the LLMs are saturating their writing with it, we'll all learn to take it as a sign of glib superficiality and inauthenticity, and typical good human writing will start avoiding everything of that kind, and therefore get that little bit harder to read. Alas.
I refuse to cater to the "em dashes are AI" crowd.
And I was just noticing that my home-built blog render pipeline produces dumb quotes and that was embarrassing to me. Needs to be fixed.
(Counterpoint, dumb quotes are 7-bit clean and paste nicely... Hmm.)
> I refuse to cater to the "em dashes are AI" crowd.
I wrote a plugin for my blog that converts all hyphens (surrounded by whitespace) into em-dashes.
https://blog.nawaz.org/posts/2025/Dec/a-proclamation-regardi...
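A conversion like the plugin described might be sketched as follows; this is a hypothetical regex pass of my own, not the linked plugin's actual code:

```python
import re

def promote_hyphens(text: str) -> str:
    """Replace a hyphen surrounded by whitespace with an em-dash glyph.

    Intra-word hyphens (e.g. "well-known") are left untouched, since
    only whitespace-surrounded hyphens are being used as dashes.
    """
    return re.sub(r"(?<=\s)-(?=\s)", "\u2014", text)
```

The lookbehind/lookahead keep the surrounding spaces intact, so only the hyphen character itself is swapped.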
I feel ya. I've never been accused of using an LLM, fortunately, but depending on the context I do use “smart quotes” (even in „Dutch” or »German«) and the em-dash obviously… (And that ellipsis fella there. It's just so simple to type with a compose key set up.)
I thought the guillemet was French rather than German and the other way around.
German uses both kinds depending on the style and writer's preference. French has the guillemets the other way around.
(That Wikipedia table shows that too by the way.)
It's absolutely shocking how many people think that inverting all the quality metrics that we've traditionally used "because LLMs" will lead to good things. Nothing about this will end well.
Same here, I've always used em dashes and have been called out on negative comparisons – I didn't even know they were an LLM thing. Should I read more LLM to know what phraseology to avoid, or will doing that nudge me towards sounding more LLM? :-(
I have been writing stuff for a long time; my first internet experience was posting on forums about a Gameboy Advance game. Then in other forums, for a philosophy degree, and professionally as a copywriter and technical writer. I’ve been meaning to write up a post of my thoughts on writing and AI, but the things I’ve been thinking about recently are:
1. There was a lot of slop pre-AI. In fact I’d say the majority of published writing was bad, formulaic, and just written to manipulate your emotions. So in some sense, I don’t really think pre-AI slop had more value. It’s just cheaper to make now.
2. AI has prompted me to study more off-beat writers that followed the rules of language a little less frequently. This includes a lot of people from circa 1890-1970, when experimenting with form was really in vogue.
3. Which brings me to my third point, which is that no matter how much the AI actually knows about writing, the person prompting it is limited by their own education and knowledge of writers. You can’t say, “make me a post in the style of Burroughs” if you don’t know who Burroughs was, or what his writing style was. So in a sense there is an increased importance to being educated about writing itself. Without it you’re limited in your ability to use AIs to write stuff and in your awareness of how much your non-AI written work is influenced by AI writing.
I've been a Grammarly customer for quite some time, and I have tried the AI suggestions, but it always loses something and ends up with a whiny, apologetic tone.
AI always seems so verbose and wordy.
I am sorry, but perhaps some use of AI or a grammar check would help? A lawn that's not overly manicured has its charm, but if it has one too many barren patches or clumps of overgrown grass, it doesn't appeal as much. This essay feels a bit like that.
I'd push back and ask the author whether his writing is really getting worse, or whether his standards have increased, leading to undue stress that throws off the flow state.
Are there any good writing LLMs out there?
I get that the mainstream ones have been RLHF'd to death, but surely there must be others that are capable?
https://hemingwayapp.com/ gives you advice about your writing.
This is called Hemingway because he was apparently good at communicating efficiently which made him a popular author.
What happens if you take the output of a mainstream LLM and send it through this app? Would that solve the issue of the original article?
This is an interface, not an LLM. Do they say which LLM they use? Many of these are interfaces to one of the big three model providers. Others run through OpenRouter to use one of the better open models, all of which have their own quirks.
Can't you just... not do this?
I never passed any AI writing as my own. I would feel utterly awful. Also, I love tweaking words until they sound perfect.
The number of people who just nonchalantly admit that AI writes their messages is honestly scaring me.
It’s largely a problem of how these tools are packaged, but while it’s certainly nice to have an LLM check your spelling, or review your grammar or style or usage, you should never allow them to actually edit your document directly.
First of all, they will make substantive changes you didn’t intend. The meaning will get changed, errors will be introduced. Tone will be off, and as the author says, your voice will disappear. There is no single “correct” way to write something. And voice and tone are conveyed with grammatical and usage variation. Don’t give that up to a robotic average.
Secondly, you will never improve, or even maintain, your own writing skills if you don’t actively engage with the suggested changes. You also won’t fully realize half the purpose of writing, which is to understand the topic better yourself. Doing the work of editing your piece will help you understand the subject even better. If you just let the machine “fix” your errors, you’ll become a worse writer and less of an expert over time.
Once I think something is AI I just can’t read it anymore. It isn’t out of principle or anything, I just become so distracted by the idea that I can’t focus or derive any benefit or pleasure from continuing.
This writing is terrible. I can't read it. But are people really unable to write without wanting to put it into an LLM? I haven't done a single piece of natural language writing with an LLM. The thought has never even crossed my mind. Why would I? Surely to give the LLM context of whatever I want to write about would amount to, you know, writing it down? Just write that "prompt" in your blog and send it. No need for LLMs.
> It means, I passed the text.
Ha. Well I guess you did, _this time_.
Can we not just ask an AI to correct our spelling mistakes and leave the rest alone?
For the most part, AI writing is pretty bad. It reads like a high school kid trying to hit a minimum word requirement.
> This post, is written without any tools assistance I just wrote what my brain is instructing to type (might not reread it before posting).
How is the author complaining about the quality of their own writing while admitting to not even bothering reading what they wrote, let alone editing it?
(Also, why would using an LLM-based grammar checker trigger an AI writing detector? Did it end up rewriting substantial parts of the original submission?)
Because they're self-aware perfectionists and are actively working to stop it: they reach for all kinds of tools like grammar checkers and AI, but they're aware that using those will make the post lose "their" voice, or the human element of the post.
And that's, I think, a valid choice; you can choose to use all the tools and make something grammatically and stylistically as close to perfect as possible, but who would want to read something so dry? That's for formal writing, and blog posts are not formal.
Reading what you write for editing does not make a text lose your voice. If anything, it amplifies it: you get to ensure that what you intended to say was said.
Not reading what you write smells more like laziness.
Same thing for spell checks, grammar checks, and even AI usage. If you use things lazily, the result will be lazy as well.
Instead of asking for an AI tool to write your thoughts in your place, you can write it yourself and ask it to criticize your text, instruct it to not rewrite anything, only give you an overall picture of text clarity, sentiment, etc.
But that of course would require more work. Asking ChatGPT to produce a text based on a lazily written, bullet point list of brainfarts is probably easier.
Great! That's a good thing. Embrace being human sometimes.
Plus, "lazy" would actually be just using AI to edit the writing.
> instruct it to not rewrite anything, only give you an overall picture of text clarity, sentiment, etc.
LLMs can't really do that. They can help you produce a correct sentence where you struggle to create your own, but they don't have the capability to do what you suggest.
It sounds like you haven't tried.
LLMs definitely can do this. The output tends to be overly positive though, claiming that any sort of rough draft you give them is "great, almost ready for publishing!". But the feedback you can get on clarity, narrative flow, weak spots... _is_ usually pretty good.
Now, following that feedback to the letter is going to end up with a diluted message and boring voice, so it's up to you to do with the feedback whatever you think best.
What? LLMs are very capable of doing sentiment analysis. Hell, it's basically one of the things it actually excels at - understanding tone, nuance, context, etc.
I used it many times for exactly this, with good results. It points out ambiguous constructs, parts that are dissonant from the tone I intend, etc.
I have no idea why you think that LLMs can't do that lol
Sentiment analysis for the purpose of categorizing Reddit comments, sure. For the purpose of giving you advice about nuance, overall clarity, and tone of your own long text, no.
I tried it myself, and it actually did a good job.
There's nothing magical about a long text you write yourself vs a stream of Reddit comments in a thread. It's all sentiment analysis on text. It can extract ambiguity, see how ideas are connected in context, categorize and summarize, etc.
You should try it and see it for yourself. Feed it some large text of a single author and ask it to do those things, see if the results are satisfactory.
If you use a grammar checker as a grammar checker, it won't make you loose your voice. It will make you use correct grammar.
> you can choose to use all the tools and make something gramatically and stylistically as close to perfect, but who would want to read something as dry
If it is dry, then it is not stylistically perfect. By definition, dry writing is just imperfect writing. Stylistically perfect writing does not have to be dry, and usually is not.
What happens here is that people use "stylistically perfect" when they mean "followed bad stylistic advice".
I see both sides here. Wanting to preserve your natural voice is valid, but editing and using tools don't necessarily take that away. In fact, they can help make your intended message clearer. It probably comes down to how much control you keep over the final result rather than whether you use tools at all.
What annoys me here is that people say "I use AI as a style checker to make my writing better" or claim that good writing is unfairly judged as being by AI ... and then proceed to describe inferior writing results they achieved with AI. Nothing the author wrote there signals that the way he uses AI made his writing better. His use of AI made his output inferior. And not just in the "loosing own voice" way; worse, literally in the sense that the final text is less effective writing.
I do not mean this comment as a kick against AI. It is very good for some stuff and less good for other stuff. What annoys me is someone calling output superior while actually complaining about it being inferior.
Hey, maybe that llm needs to be used differently to achieve actually good writing results.
lose!
There is no reliable way to detect AI writing. A detector probably trains on texts known to be AI-generated and texts known to be written by humans, then classifies new text according to this training.
The problem is that it has a pretty high false positive rate. Maybe it thinks it's AI because there are absolutely no spelling mistakes. Or maybe you're French and you use latin-roots words in English that are considered "too smart" for the average writer.
And the problem is that people run those tools, see "80% chance to be written by AI", and instead of considering that 20% is high enough to consider you don't know, will assume it's definitely written by AI.
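To make that concrete, here's a minimal Bayes-rule sketch, with made-up numbers for the base rate and error rates, showing why a detector flag is weaker evidence than the headline score suggests:

```python
def posterior_ai(prior_ai: float, p_flag_given_ai: float,
                 p_flag_given_human: float) -> float:
    """Bayes' rule: probability a flagged text really is AI-written."""
    p_flag = p_flag_given_ai * prior_ai + p_flag_given_human * (1 - prior_ai)
    return p_flag_given_ai * prior_ai / p_flag

# Illustrative numbers only: 20% of submissions are AI, the detector
# catches 90% of AI text but also flags 15% of human text.
p = posterior_ai(0.2, 0.9, 0.15)  # roughly 0.6: a flag means ~60%, not certainty
```

With a higher human false-positive rate, or a lower base rate of AI submissions, the posterior drops even further, which is exactly the "you don't actually know" point above.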
Exactly. Depending on what nutrients I've been consuming, the Indians/intelligence in my head could also be artificial. Perhaps that's why I fail those captcha tests most of the time.
> Also, why would using a LLM based grammar checker trigger an AI writing detector? Did it end up rewriting substantial parts of the original submission?)
Grammarly has seriously started rewriting whole paragraphs recently. I have been having to reject more and more "prompts" where in the past I would accept them almost by default, because they actually were grammar checks.
What makes you think that? I presume that's just the authors (sarcastic) way to say "beware: may contain typos and grammatical errors".
There are a bunch of typos in there which jar a bit ('deterioted'), but I guess that makes sense for this specific article.
Personally, I would recommend they simply use any old editor with spellchecking enabled. That suffices for most writing where you just want to keep your own voice. To me, the red squiggly line just means that I should edit that word myself. In the rare case where I'm stumped on the spelling, I'll look at the suggested edit, but never as a matter of course.
The overarching issue here is that the complaint about AI slop is actually a bigger problem that has been plaguing America in particular for many years now, of which the AI slop era is only the most recent peak. The quality of American writing has clearly been in precipitous decline for a very long time, predating AI slop and even spell checkers and computers.
Computers, digital text, and digital information distribution have made writing and thoughts cheap. And, as we are surely all aware, humans rarely value that which is cheap, whether in money or in effort and its consequent quality. What people seem reluctant or unable to acknowledge is that predating the current AI slop was what could be called human slop: low-quality, low-effort, careless output that was cheap; regardless of whether AI slop now outperforms it.
That is why you are justified in pointing out that, even in a post complaining about AI slop, the human has apparently abandoned what was common practice until recently: using basic spellcheckers, or simply reviewing what was written, and practicing with deliberation the art and skill of writing, grammar, and sentence structure.
No one is perfect, and that is part of what makes anything human: somewhat inexplicable and random variation. However, it takes a certain refinement before unique human character becomes a positive quality and is not just humans being sloppy ... human slop.
> The qualities of American writing have clearly been on a precipitous decline for a very long time now, predating AI slop and even spell checkers and computers.
https://www.literaturelust.com/post/what-writers-need-to-kno...
> Every NYT bestseller from 1960 to 2014 falls in the seventh-grade level spread, from 4th to 11th.
> ...
> Since 2000, only 2 bestsellers have scored higher than 9th-grade readability.
> ... ...
> The bestselling authors of our time are writing at the 4th-grade level.
> > “8 books tie for the lowest score,” a 4.4, just above 4th-grade level. Prolific, well-known authors with huge sales: James Patterson, Janet Evanovich, and Nora Roberts.
> These three authors have written a combined total of 419 books.
Yes, these people are so unbelievably stupid that they think others more intelligent than them can't tell when they use AI to write their stuff. And then they act so annoyed when they get exposed... It's unbearable.
The article here is still full of AI slop, and so many people in the comments are defending the author. Blows my mind.
Yeah, now it's "Here's what nobody else talks about" and "Here's the kicker" all day long.
wrong, or at least, slightlywrong, but not, lesswrong
you are missing the writing era, which is gone. whatever we have now will slowly congeal into cold grue that will get a name or names
the madness of being chastised for speakerphoning and disturbing people gulping the slop
what do we call that?
There is no grandiose "AI era". Or, if there is, it already started back in the 1950s.
What it is going to be is a 'Slop Decade' - a much better label if you insist on having one.
I remember taking a machine learning course in which the instructor explicitly warned us to make wise fiscal decisions, based on the assumption that ML funding follows a hype-driven boom/bust cycle.
"Save during the summers and you'll make it through the winters".
The slop decade will be a slop "rest of humanity." There's no going back from this.
I think some spaces will try to retain their value by actively combating LLMs, in the same way they combat hackers and trolls, and if they don't, they'll naturally die.
Several subreddits became AI slop submission repositories and their human engagement dwindled. Some subreddits that were inundated with AI slop implemented policies that ban it, and it seems to work well.
Strict no slop policies work, and surprisingly, so do rules that require AI submissions to be tagged as AI. Forcing slop slingers to tag their slop does a good job at discouraging said slop, it turns out that admitting your slop is slop is embarrassing or something.
No technology ever became obsolete?
Oh well, when the most powerful people on the planet manage to enshittify it enough, we'll be freed from AI...
Or maybe there'll be the elite enjoying the world, while the rest of us have to work manual labor. But at least it'll be AI systems ensuring our compliance!