
Culture War Roundup for the week of December 22, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


That's not the point. LLMs would make many people's writing stronger (for some value of "strong"). I'd rather read your writing, weaker or not. Now when I read you, every point you make, every turn of phrase, every word choice, I don't know if it was you or the LLM. Sure, maybe 80% to 90% of it was you. I can't know, and that makes me not care. I can prompt ChatGPT for its sparkling shiny opinions all day long.

I genuinely do not understand the intuition at play here. Let's imagine someone who has an instinctual aversion to the use of AI image gen: is using Adobe Firefly to change a single pixel sufficient to taint a larger painting? Two pixels? Ten? To finish the blocked-in background that the artist would have been too lazy to finish had he not had the tools at hand?

What if the artist deletes the AI pixel and reinserts one himself, with the exact same hexcode?

(It is worth noting that at one point, in the not-so-distant past, even Photoshop itself was treated with similar suspicion.)

Where is your threshold for "too much"? When you recognize an AI fingerprint? The problem is that once you begin suspecting it in a particular user, it is easy to imagine that there is more of it than in reality. Of course, if you have an all-or-nothing attitude, then I suppose that sounds less horrible to you than it does to me. I skew closer to a linear-no-threshold model, or perhaps one where, for the average writer, there exists an x% of AI usage that will increase overall quality as measured by multiple observers. Preferably blinded ones.

This x% can be very high for the truly average. I'm talking average Redditor. For others it can be very low, vanishingly so. Scott has mentioned that he has tried using LLMs to imitate his own style and has been consistently disappointed in the outcome.

I think, for me, the optimal amount is 1-10%. 20% is pushing it. This essay is closer to 20%. But even that 20% is closely vetted for factuality. Alas, it has not been vetted for style as hard, or else this topic wouldn't have arisen. In fact, I didn't particularly try. Performing edits to launder AI commentary as my own strikes me as dishonest.

I envision myself as the artist using the tool to finish painting that unfinished background. Sometimes it makes something so good it's worth bringing to prominence in the foreground. The day when I can see no conceivable value-add from my own contribution is when I pack my bags as a writer. I suppose it is fortunate that I've been at it so long that there is a sizeable corpus of time-stamped, archived evidence showing that I am damn good without it. That I don't need it. I still think I benefit from it, though I'm not sure I can change your mind on the topic.

After all, there are a lot of people making pure slop. I try not to ever become one of them.

There's a minor scandal in the tumblr video game sphere, because Larian Studios discussed the use of AI tools in their development pipeline. It's not clear exactly what they were using the tools for, but most critics have read it as Larian only using AI-gen for concept art that won't get even a pixel in the final game, and they're still very unhappy with it.

((I've been trying to put together a top-level post on the topic, but I dunno if it'll be interesting enough, or if it'll be me going full Gell-Mann Amnesia given that we have actual video game artist experts around.))

It's a shame they're being shamed. One of my takeaways from GPT-4 was that it was good enough to beat a lot of video game text and dialogue. Filler content, conversations with NPC #987, and side quests? AI can jazz up the things the budget doesn't allow for. It should no longer be cost-prohibitive to develop the 120 filler fetch quests into something slightly more meaningful and engaging: extra flair, storytelling, or character development where there was barebones effort. Someone needs to weather the criticism, raise the bar, and get paid for it.

Sometimes some scuffed, lazy writing for NPC #69420 is exactly what you need, rather than ChatGPT's relentless and breathless "flair." NPC #69420 is lazy and tired and has no time for your bullshit, just like the underpaid and overworked human scriptwriter for the game.

Disagree on all counts. When AI does content, it's ultra-generic, has no sense of tone or effect, and lacks any of the idiosyncratic spontaneity of even sloppily-put-together human content. There's also a lack of broader complexity, meaning that any 'character development' it adds to a given story doesn't correspond to a grander vision of what that story aspires to be or is about, but is essentially just the cut-and-pasted clichés, tropes, and emblems of other works, which only incidentally contain some of their meaning. I'd prefer content to be cut rather than given over to AI.

At least some of the sameness of AI writing reflects misuse or unfamiliarity with the tools, rather than a deeper problem with the technology itself. It's not hard to give different flavors to different characters. It actually takes some effort to avoid going too hard on it. Completely avoiding the dread em-dash requires a bit more effort, and keeping a world consistent requires a decent (set of) lore bibles, but it's definitely possible and a bit easier than building and keeping coherent a more serious effort by human writers. The more complete your lore bible, the more the LLM can give the appearance of talking about a consistent world.

It still needs human review -- I left in a prompt for the foxman merchant version that has a logical error with a pronoun, and did regen one response for the hedge mage that had a sentence structure error giving bear tails a claw -- but that's a lot faster than manual writing even for that.
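To make that concrete, here's a minimal sketch of what that kind of lore-bible workflow might look like, assuming an OpenAI-style chat API. Everything in it (the model name, the lore text, the character voices) is a placeholder of my own invention, not anything from the examples above:

```python
# Hypothetical sketch: generating per-character NPC dialogue that stays
# grounded in a shared "lore bible" pinned into the system prompt.
# All names and lore here are invented placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LORE_BIBLE = """The town of Ugort sits on the northern trade road.
The Second Grand War ended twelve years ago; veterans are common.
Vorseork blasters are rare pre-war weapons, mostly kept as heirlooms."""

# Per-character voice notes: this is where the "different flavors" live.
CHARACTERS = {
    "merchant": "Terse and mercenary. Counts coins mid-sentence.",
    "hedge_mage": "Rambling and pedantic. Prone to botanical metaphors.",
}

def npc_line(character: str, player_said: str) -> str:
    """One reply in the character's voice, consistent with the lore bible."""
    system = (
        f"World lore (never contradict it):\n{LORE_BIBLE}\n\n"
        f"You are an NPC in this world. Voice: {CHARACTERS[character]}\n"
        "Reply in one or two short lines. No em-dashes."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": player_said},
        ],
    )
    return resp.choices[0].message.content

print(npc_line("merchant", "Heard anything about the pirates near Ugort?"))
```

The design point is just that the shared lore rides along in every call while only the short voice note varies per character, which is what keeps the world consistent without flattening everyone into the same register. The output would still need the human review described above.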

ultra generic, has no sense of tone or effect, and lacks any of the idiosyncratic spontaneity of even sloppily put together human content... a lack of broader complexity, meaning that any ‘character development’ it’s adding to a given story isn’t corresponding to a grander vision of what that story aspires to be or is about

That's what I think about many video games! Granted, RPGs were never my forte, but I have played enough games. I've even played enough recent-ish titles.

Take any Bethesda game, or probably most other open-world titles. They all have mountains of generic filler called "content" that doesn't get cut despite being generic filler. The content doesn't get cut because it needs to be there. Players like wandering into an interaction, and they like doing the thing. That's the appeal. Developers can reward players with a do-the-thing-get-thing loop, and writers can reward players for doing the thing by getting them invested in some story or consequence behind it. A great game delivers both the dopamine hit of did-thing-got-thing and an engaging story. There are not very many great games, and there are only so many opportunities for great writing in a given game.

In Starfield, there's a common loop. Player meets character NPC. Player may have up to 4 distinct interactions with the character. Possibly one or two of those interactions have 2 different variations. The player is provided with a few sentences of backstory in some way, then the player is expected to recognize the shape of a familiar story and fill in the blanks. These storytelling opportunities come in a few phases, translated to video game format:

  • Introduction: "Woah, hey there! I'm Sam, the elderly forgotten veteran who runs the goompiunk shop now ever since my wife passed away. I do miss Marla. I'd love to tell you a story about the time in the space war, but no one cares about old Sam anymore. Not since those dastardly Space Pirates showed up 150 meters to the northeast at Ugorts Bar. They've been coming around every week to extort me. Last time they smashed my favorite picture of Marla here and..."
  • Quest complete: "Thanks, stranger. You know... (whispers) Old Sam has an old Vorseork Blaster from the Second Grand War in the back of the shop. I don't think Old Sam will be needing it, since you've dealt with those Marituzen thugs at Ugorts Bar."
  • Repeat process until story is exhausted. Loop the last dialogue option.

There's nothing in there that can't be improved upon by a writer working with an LLM. If nothing else, it gives the player the opportunity for added depth in a bland and boring A-to-B experience. The generic story shape the player is expected to recognize and fill in gets more filling.

Starfield is a bad game, but Starfield had so many of these generic fetch quests, generic storylines, and generic dialogues that I don't think I got close to finishing it. And hey, I know this developer; I expect some level of generic human slop, but boy did it seem bad. On the other side, I've also played most of Baldur's Gate 3. BG3 is a much higher-quality game; as I understand it, it's considered a generational game. My generation's version in this genre would be The Witcher 3, which, as I recall, had relatively rich stories and writing in parts. There's still plenty of bland, formulaic, or marginal content that wouldn't be harmed by curated robot slop.

Now, I may not be aware of the newfangled indie RPGs true gamers play these days, but I have played enough games to know the writers often phone it in at a level no better than a good prompt. That may be due to workload, as is typical for the industry, or it might be that video game writers write games for reasons other than greatness. In either case, I bet there's a use case for this now. Today! Someone could go find a banal interaction in a game, feed in a few prompts, and get something that enriches that experience. No question in my mind. AI will not single-handedly create a cohesive BG3 storyboard and 100 hours of dialogue in one go, but even a free model can help a mediocre writer enrich their 15-minute mini-story side quest #121.

There's nothing in there that can't be improved upon by a writer working with an LLM.

There's nothing in there that can't be improved as prose, but are you entirely sure that the changes will be improvements as game writing?

I like Table Top RPGs, despite them being worse than some Computer RPGs in every way but one, and the one way they're better is the way that matters here: in a TTRPG, your players don't have to be railroaded nearly so strictly. When the players try to dig deep into the interactions with some character, there can always be something rewarding they can dig deep into. Once the Game Master runs out of official quest writeup material, he can start to improvise, and those improvisations can actually affect all subsequent gameplay. It's quite common for players to develop an attachment to someone like that elderly forgotten veteran NPC, who the GM can then slot into other parts of the story, on the fly, as a recurring side character, making the story much more fun and interesting. In the longest-running game I run, my players have one originally-mid-level mook who's managed to escape enough fights to become a recurring villain (with some hilarious banter), and even have another three mooks who (via vast interleaved efforts of diplomacy and subterfuge) they've managed to semi-reform and (despite some lingering head-butting with PCs and each other) recruit as underlings. The written adventures for this campaign included some designed-as-recurring-character NPC friends and villains, too, of course, but these four were all characters who were written with at most a short backstory but who were expected to be eliminated in the first encounter if the players had been aggressive enough and their dice rolls lucky enough. We're all glad they weren't.

In a CRPG ... do you want to let the AI rewrite your game on the fly, like a GM does, not just write things you can review in advance? Writing on the fly is probably an AGI-complete problem. If you've got an LLM that you trust not to make its part of your game worse than your part then you might as well let it write your part too. But if all your writing is done in advance, that won't let you have long-term effects on the story. The possibilities you'd have to write for grow exponentially with elapsed gameplay, as more story elements arise and more combinations in which they might affect Ascended Extras' actions accumulate. If you instead do a lot of writing in advance without letting the now-fleshed-out side characters have long-term effects on the story, that just tricks the player with false affordances: instead of interacting with a world where ten characters have deep dialogue trees and obviously are critical to the story and another hundred characters quickly get to a loop with nothing new to say and are obviously scenery after that, you'd be giving them a world where ten characters have deep dialogue trees and are critical to the story while another hundred characters have deep dialogue trees but are still going to be plot dead-ends after those trees are finally exhausted.

Roger Ebert infamously took the stance that "video games can never be art", which was nonsense, but the interactivity of games is a bit of a two-edged sword: on the one hand it's an additional capacity that can make video games much better art than non-interactive media, but on the other hand it puts the artist even more at the mercy of the audience than is the case in other media. Someone may fail to understand what you intended them to understand from your painting, but at least once they're part of your painting's audience they'll see what you intended them to see. If you want to make art in the form of a game, however, everyone in your audience is also your collaborator, and your job isn't just to make them understand a finished product, it's to guide them into helping properly finish that product with you. Part of that guidance is making it easier to see which parts of the work they should focus on the most and which are just intended to be out-of-focus background. Making the background more beautiful would be an improvement, all other things being equal, but making it more beautiful without accidentally bringing it to a spot in the foreground where it shouldn't be is much trickier.

The reason new fiction writers always have to be told to be unafraid to "kill your darlings" is that it's true but non-obvious that most authors' writing can best be improved not by expanding it but by cutting it, removing the digressions and infodumps and red herrings and detached side plots and on and on until you're left only with the things that most contribute to the story. Game writers (and level designers, and so on) have a much harder problem, because even if you avoid handing the player a pointless distraction, the player might seek it out anyway, and they'll enjoy the game less as a result even if they don't understand why. I recommend playing Half-Life 2 Episodes 1 and 2 with Director's Commentary: some of the most interesting tidbits there are the tricks with which they coax players into actions as simple as looking in the right direction at the right time to see a scripted event, while not actually taking any control away from the player or even letting most players realize they'd been maneuvered into making the decisions they did.

but are you entirely sure that the changes will be improvements as game writing?

Absolutely not. I'm just pretty sure games haven't perfected the process and that there is room for improvement by using AI during their creation. No game developer should trust the consumer's wisdom. I would recommend they ignore gamers most of all.

do you want to let the AI rewrite your game on the fly, like a GM does, not just write things you can review in advance?

If it does it well because it's powered by the 200-million-dollar Unreal Storytelling Engine, yeah, I'd try it out. But as mentioned, I don't think we need procedurally generated stuff to have more choice and consequence in a game. The best-practices reasoning ignores all the limitations, constraints, industry standards, the general audience, and on and on. Pressures that can be mitigated by an always-on, if not always deferred-to, modest writer.*

a world where ten characters have deep dialogue trees and are critical to the story while another hundred characters have deep dialogue trees but are still going to be plot dead-ends after those trees are finally exhausted

Can we not provide clues, or even tell the player what they can expect from the 100 'non-essential' characters? Don't many games already do this when they choose to include such content in the experience?

I recommend playing the Half Life 2 Episode 1+2 with Director's Commentary

I don't need to; they were clearly an innovator, if not a pioneer, in "show, don't tell" for video games. I'm not saying to take creators, artists, and innovators out of the equation, or to cede your storytelling role to a robot. Guardrails are good and necessary in many cases. Based on my experience with LLMs (prose, granted) and my experience with video games: there's already a role for tapping LLMs, and it would be an improvement in many games, including big-budget titles. But I appreciated the perspective!

Starfield was heavily panned as having by far the worst writing of all Bethesda games. In a game like Skyrim, even the useless fetch quests have some variety in the framing.

Starfield's writing is no worse than that of Skyrim, Oblivion, or Fallout 3/4. Morrowind also had bad quest writing, but it was elevated by the weird-fiction aspects, some good worldbuilding, and narrative constraints imposed by a tight deadline that mercifully limited quest text volume.

I was thinking of adding some caveats, more or less for what you've mentioned: games that aspire to create extremely vast worlds for the sole purpose of escapism, games that focus more on modularity and repetition than complex storytelling, games that have some sort of unique visionary use of AI in mind.

And I could see content created by writers who make use of AI being OK as well. The artist can compensate for the AI's weaknesses while using it selectively as a specialized, rather than all-purpose, tool. AI does have some strengths, mainly as a search tool. It's hugely deficient if you ask it to be creative, though, and I don't believe it's doing much more than merely copy-pasting content from its database.

In any case, I resisted this impulse on the basis that most of this stuff seems either degenerate or suspect.

Funnily enough, I was thinking exactly of the Elder Scrolls as a series that in some cases has produced slop content of a hugely superior calibre to AI. I speak of Morrowind here. There is no way AI by itself would ever come up with exceptional banalities like three naked Nord barbarians who’ve been identically robbed by witches or dirt farmers giving you the exact same encyclopaedic digressions on regional geography anytime you ask. AI can give you banalities all right, but not of any variety that suggests an underlying meaning or humour to it. These aren’t examples of genius, but they are examples of the kind of colour and charm that humans will give you even in the service of creating slop, and in both cases, they work not just as bland content, but as stuff that enhances the actual quality of the game. AI can’t resist reverting to the generic, so it would entirely come down to human creators to invent these sorts of bizarre outputs. In fact, in many cases I think it’s the twists and turns of the creative processes themselves that humans have to go through that leads them down these unlikely avenues. Case in point, the absurdity in these examples was a direct product of what humans had to do to try and overcome their material limitations.

There is no way AI by itself would ever come up with exceptional banalities like three naked Nord barbarians who’ve been identically robbed by witches or dirt farmers giving you the exact same encyclopaedic digressions on regional geography anytime you ask.

No, it wouldn't come up with that from scratch. That sounds like the magic sauce: the intersection of absurdist-yet-earnest charming humor, a games-are-meant-to-be-fun mindset, and creative freedom. LLMs can imitate the style. As a rule I try not to share LLM outputs, because they aren't interesting, but try variations of this prompt:

Create a short Grand Theft Auto radio segment script/text. Involve Lazlow the in game radio personality and the game's iconic humor.

I added a word here or there, including an attempt with "nihilistic" added before "iconic." Only one of them was awful-awful. Claude even made me smile with one bit; it may have been a real bit, who knows? You could work on the prompt and continue to pay a comedian to work the material, and end up with 5x Lazlow radio segments. Or consider this recent viral Twitter post on output allegedly spawned from a single prompt. But I'm devolving us down to "is AI good?", which has been done to death.

I suspect LLMs are already quite capable of assisting in what I see as video games' weak areas of mediocrity wherever text is required. If a human is already doing things at a surface level, like the elderly-veteran trope, then AI can provide more styled meat, or ideas, for the meal.

There are definitely some hysterics who can't stand AI touching anything whatsoever. And like I've said before, if you integrate AI into your work smoothly enough that we can't tell, well, we can't tell. But I think just about everyone who read @self_made_human's OP could spot the AI signature.

That's fair. Further, I like to think that keeping a "you have to actually read and rewrite the AI's output" principle will, optimistically, get us a best-of-both-worlds situation where the human's writing benefits from the machine's access to information, and will, pessimistically, at least reduce some of the spam potential. But I will caveat that you're vastly overestimating the ability of the casual reader to spot AI signatures without a very high false-positive ratio.

"you have to actually read and rewrite the AI's output"

I think one of the issues is that people won't read the AI's output.

So earlier this year, I was applying for jobs. Originally, I wrote each cover letter individually for each posting. This slowly made me suicidal: spending 15-30 minutes per application, when I was unlikely to even hear back from the majority of them, was soul-destroying. The next thing I did was take a "template" cover letter and swap out a few things (so, like, in my "accomplishments" section, I'd rewrite it to emphasize the skills the job requested). This took around 5 minutes per application and was still soul-destroying, because I still wasn't hearing back from very many jobs. So eventually I started pointing ChatGPT at the cover letters, and I promised myself that I'd rewrite its output every single time.

Well, that lasted around 5 attempts until I basically got sick of it and started skimming. I went and took a look at some of the cover letters I "wrote", and about a third of them have obvious ChatGPT-isms: em-dashes, that specific half-fawning phrasing that ChatGPT uses, etc. Thank god resumes were being read by LLMs too, or I'd never have gotten a new position.

Humans are lazy - they're going to take the path of least resistance every time. They'll claim that they read the whole thing, and for some definition of "read", they will have - but they'll be stuck with the LLM's phrasing and concepts.

Here's an example I fed into ChatGPT for rephrasing (my words first):

LLMs will introduce their own biases into the resultant writing. If everyone uses them, this will lead to less ideological diversity as every person will be essentially arguing as an AI, not as a human.

Because LLMs impose their own biases on generated text, universal use could erode ideological diversity, with people arguing through an AI lens instead of their own.

You can obviously tell that the concept is the same, but there are subtle differences in the meanings. If I were writing a larger text, I'd probably accept the AI text as "essentially the same thing" - but they're not. My text is much more emphatic about it being what will happen, whereas the LLM text is downplaying it. Multiply this by a much larger text, and you have an entirely different emphasis.

I think just about everyone who read the OP could spot the AI signature.

I will register disagreement with this broad assertion.

Where is your threshold for "too much"?

I know it when I see it, and when I see AI writing, it's too much.

Come on, spare me the "But what about PHOTOSHOP????? What about SPELLCHECKERS????" I am not an AI newb, nor an AI-hater. But you should not be using AI to generate your words for posting here. That is my opinion, and it will remain my opinion.

After all, there are a lot of people making pure slop. I try not to ever become one of them.

Just 10%-20% slop. That's too much slop.

Come on, spare me the "But what about PHOTOSHOP????? What about SPELLCHECKERS????" I am not an AI newb, nor an AI-hater. But you should not be using AI to generate your words for posting here. That is my opinion, and it will remain my opinion.

I'm wounded that you think my argument is as unsubtle as that. What I intended to get across is that a black-or-white approach is closer to an article of faith. The real world is not made of pixels; it is made of atoms (or wave functions, or...) which do not come with convenient metadata attesting to origin. Even a digital pixel can produce the same outcome, and so can the larger arrangements of pixels, regardless of whether meat or machine or meat machines placed them. I care about the image, not the brush. Eventually, knowing that there was (or wasn't) a brush will not add much information, or at least not much pragmatically valuable information. Just a Planck time later (as implied by the Intermediate Value Theorem), the brush will be an active detriment. Are we there? I suspect we are oh so close.

I am powerless to change your opinion here, but know I do what I do for principled reasons and not laziness. You assume the slop will stay slop. It will be better than you, or me, sooner than is comfortable.

I am powerless to change your opinion here, but know I do what I do for principled reasons and not laziness. You assume the slop will stay slop. It will be better than you, or me, sooner than is comfortable.

When AGI happens, I'll read its output.

If it's good enough that I can't tell, whatever. It is what it is.

Right now, I can still tell.

On a personal level, I write as a hobby with pretensions of someday being published. I would never use AI for my fiction writing, even if you could prove to me that the AI writes better than me, because what's the fucking point?

Will I use AI to draft recommendation letters and consumer complaints and letters of interest and the like? Sure, why not, it's probably an AI reading them.

But this place is for human interaction. If you're not using your own words, what's the fucking point?

On a personal level, I write as a hobby with pretensions of someday being published. I would never use AI for my fiction writing, even if you could prove to me that the AI writes better than me, because what's the fucking point?

Look, I wrote a novel (or a lot of it; it's unlikely to be finished at this rate) as an effort to prove that I am a genuinely competent writer, intentionally starting in 2023, when LLMs were merely becoming scary rather than today's scary-good. Nobody could accuse me of ghostwriting with them then; they were simply not good enough. These days, it is easy for me to go back to an older chapter, ask an AI to try rewriting it to be "better", and then (very grudgingly) accept that its version is superior.

I derive pleasure from both the creative release of writing, and from having my writing appreciated. I don't keep much of a private journal, I want this shit out there. When I'm truly gassed, I will probably write something, but in an artisanal capacity. It just won't be nearly as much.

But this place is for human interaction. If you're not using your own words, what's the fucking point?

Gestures back at previous arguments

What makes you think that there's no human interaction involved? Or none in the present tense? The intent of this particular post was to present a factual review of a news article, with added speculation where relevant (my speculation). The self_made_human house style was a secondary consideration. And here I am, using my very human words to engage. What is actually bad here?

If I wanted to talk to an AI, I'd have it emulate the persona of a big-titted anime bimbo who aims to flatter and please me, and not the median poster of the Motte. QED.

Oh @crushedoranges san, you're so clever and handsome! And I agree with everything you just said!

Yes, yes. Finally the recognition I deserve.