This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I am aware. I find it most unfortunate, since I do genuinely believe that LLMs help make my writing even stronger.
That's not the point. LLMs would make many people's writing stronger (for some value of "strong"). I'd rather read your writing, weaker or not. Now when I read you, every point you make, every turn of phrase, every word choice, I don't know if it was you or the LLM. Sure, maybe 80% to 90% of it was you. I can't know, and that makes me not care. I can prompt ChatGPT for its sparkling shiny opinions all day long.
I genuinely do not understand the intuition at play here. Let's imagine someone who has an instinctual aversion to the use of AI image gen: is using Adobe Firefly to change a single pixel sufficient to taint a larger painting? Two pixels? Ten? To finish the blocked-in background that the artist would have been too lazy to finish had he not had the tools at hand?
What if the artist deletes the AI pixel and reinserts one himself, with the exact same hex code?
(It is worth noting that at one point, in the not-so-distant past, even Photoshop itself was treated with similar suspicion.)
Where is your threshold for "too much"? When you recognize an AI fingerprint? The problem is that once you begin suspecting it in a particular user, it is easy to imagine more of it than is actually there. Of course, if you have an all-or-nothing attitude, then I suppose that sounds less horrible to you than it does to me. I skew closer to a linear-no-threshold model, or perhaps one where, for the average writer, there exists some x% of AI usage that increases overall quality as measured by multiple observers. Preferably blinded ones.
This x% can be very high for the truly average writer. I'm talking average Redditor. It can be very low, vanishingly so, for others. Scott has mentioned that he has tried using LLMs to imitate his own style and has been consistently disappointed in the outcome.
I think, for me, the optimal amount is 1-10%. 20% is pushing it. This essay is closer to 20%. But even that 20% is closely vetted for factuality. Alas, it has not been vetted as hard for style, or else this topic wouldn't have arisen. In fact, I didn't particularly try. Performing edits to launder AI commentary as my own strikes me as dishonest.
I envision myself as the artist using the tool to finish painting that unfinished background. Sometimes it makes something so good it's worth bringing to prominence in the foreground. The day when I can see no conceivable value-add from my own contribution is the day I pack my bags as a writer. I suppose it is fortunate that I've been at it so long that there is a sizeable corpus of time-stamped, archived evidence showing that I am damn good without it. That I don't need it. I still think I benefit from it, though I'm not sure I can change your mind on the topic.
After all, there are a lot of people making pure slop. I try not to ever become one of them.
I know it when I see it, and when I see AI writing, it's too much.
Come on, spare me the "But what about PHOTOSHOP????? What about SPELLCHECKERS????" I am not an AI newb, nor an AI-hater. But you should not be using AI to generate your words for posting here. That is my opinion, and it will remain my opinion.
Just 10%-20% slop. That's too much slop.
I'm wounded that you think my argument is as unsubtle as that. What I intended to get across is that a black-or-white approach is closer to an article of faith. The real world is not made of pixels, it is made of atoms (or wave functions or...) which do not come with convenient metadata attesting to origin. Even a digital pixel can produce the same outcome, and so can larger arrangements of pixels, regardless of whether meat or machine or meat machines placed them. I care about the image, not the brush. Eventually, knowing that there was (or wasn't) a brush will not add much information, or at least not much pragmatically valuable information. Just a Planck time later (as implied by the Intermediate Value Theorem), the brush will be an active detriment. Are we there? I suspect we are oh so close.
I am powerless to change your opinion here, but know that I do what I do for principled reasons and not out of laziness. You assume the slop will stay slop. It will be better than you, or me, sooner than is comfortable.
When AGI happens, I'll read its output.
If it's good enough that I can't tell, whatever. It is what it is.
Right now, I can still tell.
On a personal level, I write as a hobby with pretensions of someday being published. I would never use AI for my fiction writing, even if you could prove to me that the AI writes better than me, because what's the fucking point?
Will I use AI to draft recommendation letters and consumer complaints and letters of interest and the like? Sure, why not, it's probably an AI reading them.
But this place is for human interaction. If you're not using your own words, what's the fucking point?
Look, I wrote a novel (or a lot of it; it's unlikely to be finished at this rate) as an effort to prove that I am a genuinely competent writer, intentionally starting in 2023, when LLMs were merely becoming scary rather than today's scary-good. Nobody could accuse me of ghostwriting with them then; they were simply not good enough. These days, it is easy for me to go back to an older chapter, ask an AI to try rewriting it to be "better", and then have to (very grudgingly) accept that its version is superior.
I derive pleasure both from the creative release of writing and from having my writing appreciated. I don't keep much of a private journal; I want this shit out there. When I'm truly gassed, I will probably still write something, but in an artisanal capacity. It just won't be nearly as much.
Gestures back at previous arguments
What makes you think that there's no human interaction involved? Or, in the present tense, that there isn't? The intent of this particular post was to present a factual review of a news article, with added speculation where relevant (my speculation). The self_made_human house style was a secondary consideration. And here I am, using my very human words to engage. What, exactly, is bad about that?
If I wanted to talk to an AI, I'd have it emulate the persona of a big-titted anime bimbo who aims to flatter and please me, and not the median poster of the Motte. QED.
Oh @crushedoranges san, you're so clever and handsome! And I agree with everything you just said!