Culture War Roundup for the week of January 12, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Award-Winning AIs

AlphaPolis, a Japanese light novel and manga publisher, announced that it has cancelled plans for the book publication and manga adaptation of the winner of its 18th AlphaPolis Fantasy Novel Awards’ Grand Prize and Reader’s Choice awards. The winning entry, Modest Skill “Tidying Up” is the Strongest! [... ed: subtitles removed], was discovered to be predominantly AI-generated, which goes against AlphaPolis’s updated contest guidelines.

To be fair, "best isekai light novel" is somewhere between 'overly narrow superlative' and 'damning with faint praise', and it's not clear exactly how predominantly AI-generated the writing is or what procedure the human involved used. My own experience has suggested that extant LLMs don't scale well to full short stories without constant direction every 600-1k words, but that is still a lot faster than writing outright, and there are plausible meta-prompt approaches that people have used with some success for coherence, if not necessarily for quality.

Well, that's just the slop-optimizing machine winning in a slop competition.

Prior to today, I had never heard of up-and-coming neo-soul act Sienna Rose before, but based on social media today, it seems a lot of people had—she’s got three songs in the Spotify top 50 and boasts a rapidly rising listener count that’s already well into the millions. She is also, importantly, not real. That’s right, the so-called “anonymous” R&B phenom with no social media presence, digital footprint, or discernible personal traits is AI generated. Who would’ve thunk?

It's a slightly higher standard than isekai (or country music), Spotify is a much broader survey mechanism than Random Anime House, and English-language music is a little easier for native English speakers to check. My tastes in music are... unusual, but the AI-gen seems... fine? Not amazing by any means, and there are some artifacts, but neither does it seem certain that the chart numbers are just bot activity.

Well, that's not the professional use!

Vincke shared that [Studio] Larian was openly embracing and using generative AI tools for its development processes on Divinity. Though he stated that no AI work would be in the game itself ("Everything is human actors; we're writing everything ourselves," Vincke told Bloomberg), Larian devs are, per his comments, using AI to insert placeholder text and generate concept art for the heavily anticipated RPG.

It's... hard to tell how much of this is an embarrassing truth specific to Studio Larian, or if it's just the first time someone said it out loud (and Larian did later claim to roll back some of it). Clair Obscur had a prestigious award revoked after the game turned out to have a handful of temporary AI-generated assets left in a before-release-patch build. ARC Raiders uses a text-to-speech voice cloning tool for adaptive voice lines. But a studio known for its rich, atmospheric character and setting art doing this is still a data point.

(And pointedly anti-AI artists have had to wrestle with it, saying they'd draw the line here or there. We'll see if that lasts.)

And that seems like just the start?

It's easy to train a LoRA to insert your character or characters into parts of a scene, to draft a layout and consider how light would work, or to munge composition until it points characters the right way. StableDiffusion's initial release came with a bunch of oft-ignored helpers for classically extremely tedious problems, like making a texture support seamless tiling. Diffusion-based upscaling would be hard to detect even with access to raw ingest files. And, of course, DLSS is increasingly standard for AAA and even A-sized games, and it's gotten good enough that people are complaining that it's good. On the more experimental side, tools like TRELLIS and Hunyuan3D can now turn an image (or, more reasonably, a set of images) into a 3D model, and there's a small industry of specialized auto-rigging tools that could theoretically bring a set of images all the way to a fully-featured video game character.

I don't know Blender enough to judge the outputs (except to say TRELLIS tends to give really holey models). A domain expert like @FCfromSSC might be able to give more light on this topic than I can.
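The seamless-tiling helper mentioned above is conceptually simple: swap the padding on the model's conv layers to wrap-around ('circular'), so filters see across the image edge exactly as if the texture were already tiled. A numpy-only toy version of the same idea, with a box blur standing in for a conv layer (all names here are mine, not from any library):

```python
import numpy as np

def blur_wrap(img, k=3):
    """Box blur with wrap-around (circular) padding -- the same trick
    the tiling helpers apply to a diffusion model's conv layers."""
    pad = k // 2
    padded = np.pad(img, pad, mode="wrap")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
tex = rng.random((16, 16))
blurred = blur_wrap(tex)
```

Because the filter wraps, tiling the blurred texture 2x2 is pixel-identical to blurring the 2x2-tiled original, so no visible seam is ever introduced.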

Well, that's not the expert use!

Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters -- and that's not saying much -- than I do about python. It started out as my typical "google and do the monkey-see-monkey-do" kind of programming, but then I cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualizer.

That's a pretty standard git commit message, these days, excepting the bit where anyone actually uses and potentially even pays for Antigravity. What's noteworthy is the user tag:

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Assuming Torvalds hasn't been paid to advertise, that's a bit of a feather in the cap for AI codegen. The man is notoriously picky about code quality, even for small personal projects, and from a quick read-through (as an admitted python-anti-fan) that quality seems present here. That's a long way from being useful in a 'real' codebase, from augmenting his skills in an area he knows well, or from duplicating his skills without his presence, but if you asked me whether I'd prefer to be recognized by a Japanese light novel award, Spotify's Top 50, or Linus Torvalds, I know which one I'd take.

My guesses for how quickly this stuff will progress haven't done great, but has anyone got an over/under on when a predominantly-AI, human-review-only commit makes it into the Linux kernel?

Well, that's just trivial stuff!

This page collects the various ways in which AI tools have contributed to the understanding of Erdős problems. Note that a single problem may appear multiple times in these lists.

I don't understand these questions. I don't understand the extent to which I don't understand these questions. I'm guessing that some of the publicity is overstated, but I may not be able to evaluate even that. By their own assessment, the advocates of AI-solving Erdős problems admit:

Erdős problems vary widely in difficulty (by several orders of magnitude), with a core of very interesting, but extremely difficult problems at one end of the spectrum, and a "long tail" of under-explored problems at the other, many of which are "low hanging fruit" that are very suitable for being attacked by current AI tools. Unfortunately, it is hard to tell in advance which category a given problem falls into, short of an expert literature review.

So it may not even matter. There are a number of red circles, representing failures, and even some green circles of 'success' come with the caveat that the problem was already solved, or even already solved in a suspiciously similar manner.

Still a lot better at it than I am.

Okay, that's the culture. Where's the war?

TEGAKI is a small Japanese art upload site, recently opened to (and then immediately overwhelmed by) widespread applause. Its main offerings are pretty clear:

Illustration SNS with Complete Ban on Generative AI ・Hand-drawn only (Generative AI completely prohibited, CG works are OK) ・Timelapse-based authentication system to prove it's "genuinely hand-drawn" ・Detailed statistics function for each post (referral sources and more planned for implementation)

That's a reasonable and useful service, and if they can manage to pull it off at scale - admittedly a difficult task they don't seem to be solving very well, given the current 'maintenance' has a completion estimate of gfl - I could see it taking off. If it doesn't, it still describes probably the only plausible (if imperfect) approach to distinguishing AI and human artwork, as AI models are increasingly breaking through the limits that gave them their obvious 'tells', and workflows like ControlNet or long inpainting sessions have made once-unimaginably-complex work readily achievable.

That's not the punchline. This is the punchline:

【Regarding AI Use in Development】 To state the conclusion upfront: We are using coding AI for development, maintenance, and operational support. ・Integrated Development Environment: Cursor Editor・Coding: ClaudeCode・Code Review: CodeRabbit We are using these services. We have no plans to discontinue their use.

@Porean asked "To which tribe shall the gift of AI fall?" and that was an interesting question a whole (/checks notes/) three years ago. Today, the answer is a bit of a 'mu': the different tribes might rally around flags of "AI" and "anti-AI", but that's not actually going to tell you whether they're using it, never mind whether those uses are beneficial.

In September 2014, XKCD proposed that an algorithm to identify whether a picture contains a bird would take a team of researchers five years. YOLO made that available on a single desktop by 2018, in the sense that I could and did implement training from scratch, personally. A decade after XKCD 1425, you can buy off-the-shelf equipment running (heavily stripped-down) equivalents or alternative approaches, default-on; your cell phone probably does it on someone's server unless you turn cloud functionality off, and might even then. People who loathe image diffusers love auto-caption assistance that's based around CLIP. Google's default search tool puts an LLM output at the top, and while it was rightfully derided for nearly a year as terrible Llama-level output, it's actually gotten good enough in recent months that I've started to see anti-AI people use it.

This post used AI translation, because that's default-on for Twitter. I haven't thrown it to ChatGPT or Grok to check whether it's readable or has a coherent theme. Dunno whether doing so would match my intended theme better, or worse.

My guesses for how quickly this stuff will progress haven't done great

Reading over your prediction, are you certain that it was wrong, or that you wildly underestimated how much money people were willing to throw at the problem despite the quadratic curve? My personal prediction was that we'd see moderate efficiency gains that would help, but I never thought we'd wrap up over a full percent of the US GDP in it.

It'd be a convenient dodge, but even if I were willing to take it, it'd probably still be wrong. In particular, I was under the impression that no amount of money could flatten the quadratic explosion in regards to Attention and Context, and it turned out to not only be doable, but doable in forms that could run on a single computer in my house. (Indeed, at 30B, a single computer in my house can go up to 1M tokens - off by 20x from my 50k token estimate.) It's not free, and the software development came from money as much as from obsessives poking at theory, but it's not like ChatGPT solved it primarily or even predominantly by throwing GPU cycles at the thing.
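To put numbers on that quadratic explosion: naive attention materializes an n-by-n score matrix per head, which is why 50k tokens once looked like a hard wall (FlashAttention-style kernels, which never materialize the matrix, are a big part of what flattened it). Head count and precision below are illustrative assumptions, not any particular model's:

```python
def naive_attn_bytes(n_tokens, n_heads=32, bytes_per_score=2):
    """Memory to materialize one layer's n x n attention scores at fp16."""
    return n_heads * n_tokens ** 2 * bytes_per_score

# Compare the old 'wall' against today's long contexts.
for n in (50_000, 1_000_000):
    print(f"{n:>9,} tokens -> {naive_attn_bytes(n) / 2**30:12,.0f} GiB per layer")
```

A 20x longer context costs 400x the memory per layer under the naive scheme, which is where the original pessimism came from.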

AI artistic successes are indicative of survivorship bias. The way their creators operate is by spamming vast amounts of works and seeing what sticks. Through quirks of fate, a few of them end up successful. This business model is probably short lived, though, as the very spam it relies on degenerates the platforms necessary for their proliferation, so that user interest will eventually decline. Already we’re seeing sites like Deviant Art and Literotica killed off by AI spam. AI will kill off markets rather than improve them.

Already we’re seeing sites like Deviant Art and Literotica killed off by AI spam.

What's happening? Or I guess as that's pretty obvious, what is this doing to the user base and how are moderators/the sites dealing with it?

This... varies pretty heavily by area and focus. The Furry Diffusion discord has some anti-spamming measures and a general ethos focused toward quality, and as a result it's able to keep the 'floor' pretty high and higher-upvoted images are generally pretty high-quality too. They're not all good, and even the greats aren't perfect, but the degree of intentionality that can be brought forward is far greater than most people expect.

That depends on both moderation that may scale in the face of a genuinely infinite slop machine and relatively low stakes (and, frankly, monomania), but it's at least pointing to ways AI creators can operate outside of full spam mode.

Not being an art expert, I can’t judge those images too deeply. One thing that stands out to me though is how compositionally simple those examples are. They seem to all consist of one character in the foreground and then some kind of dramatic stylistic background. My own experiences with AI image generation is that it’s very difficult to get the prompt engine to orchestrate more than just one or two characters, so that this sort of simple approach seems like it is probably the best that current AI is capable of. To me, it doesn’t seem like a rich tool for self expression.

Human artistic successes are indicative of survivorship bias. AI just makes this more visible because the productivity is so much higher.

It also amplifies the effect through the amplified productivity. That is, you can achieve greater success with a lower mean quality, because instead of having a thousand humans write a thousand works and then picking the best one, you can generate ten million AI works and then pick the best one, allowing you to select more standard deviations up. Which means that there will be literal millions of AI slop works of very low average quality, made just in the hope that one will rise to the top.

This makes discovery a lot harder and wastes more of pioneers' time reading slop in order to find the good stuff.
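The 'more standard deviations up' intuition is easy to make concrete: the expected maximum of n independent standard-normal draws grows only like sqrt(2 ln n), so a thousand-fold bigger slush pile buys surprisingly little extra peak quality. A quick sketch (treating 'quality' as Gaussian is, of course, a cartoon assumption):

```python
import math
import random

def approx_expected_max(n):
    """Leading-order growth of E[max] of n i.i.d. standard-normal draws."""
    return math.sqrt(2 * math.log(n))

def best_of(n, rng):
    """Draw n 'works' from N(0, 1) and keep only the best one."""
    return max(rng.gauss(0, 1) for _ in range(n))

rng = random.Random(0)
for n in (1_000, 1_000_000):
    print(f"n={n:>9,}: theory ~{approx_expected_max(n):.2f} sd, "
          f"sample best {best_of(n, rng):.2f} sd")
```

A thousand times the output moves the expected best work from roughly 3.7 to roughly 5.3 standard deviations: a real gain, but far from proportional, which is why the flood mostly raises volume rather than ceiling.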

I’m not a ‘math wizard’, but something about this seems off. Shakespeare didn’t write one hundred plays and then choose the best few dozen to publish. He developed his playwriting ability until he was at a skill level to consistently put out good content. If AI lacks the underlying skills to create good works, then should we expect even one in a trillion of its works to be as good as Macbeth, or should we regard the creation of such a thing as physically impossible unless underlying conditions are met? It seems like it’s less a matter of statistical probability than physical possibility.

Not true. Human works that find great success usually do so based on their merits as artistic products. AI works that find success usually do so as flukes. Put out millions of AI created light novels and occasionally one of them will slip through some quality filtering service. Their success is predicated on the inability of these services to filter quality 100%, and they enjoy an advantage over the shittiest of human works in this regard based only on the scale of their output.

Human works that find great success usually do so based on their merits as artistic products

I've seen enough experiments showing successful art and music are mostly random to think this is definitely not true.

People walk through an art gallery and are asked to rate their favourite pieces. It's like an even split.

When you mix in "experts" to tell people whether the art is good or bad, the random walk disappears and everybody just agrees with them.

Put out millions of AI created light novels and occasionally one of them will slip through some quality filtering service. Their success is predicated on the inability of these services to filter quality 100%

You're describing three hundred years of the publishing industry.

excepting the bit where anyone actually uses and potentially even pays for Antigravity

Until last week, Google's AI Pro Antigravity plan with the annual deal was one of the cheapest ways to get Claude Opus 4.5. Unclear what the impact of their new weekly rate limit is going to be.

Clair Obscur had a prestigious award revoked after the game turned out to have a handful of temporary AI-generated assets left in a before-release-patch build

My understanding is that this was not actually a prestigious award and may in fact have been done for publicity.

Huh. Fair if true. I saw (and was familiar with) Six One Indie for a couple previous years of their showcases, but I stand to be corrected if you have more detail.

Not 100% sure on that since the source was a reddit comment but a cursory google makes it seem to be the case.

While the most famous mainstream family of LLMs is the one by OpenAI, it is Anthropic which is doing the most interesting theoretical work and producing novel ways of utilizing this technology. (Ironic, given Anthropic's unorthodox beliefs regarding AI: that censorship is partially for the benefit of the model.) Anthropic focusing its efforts less on making the best artificial friend, and more on making the best artificial assistant, has produced the results expected of specialization: success. As a recent The Atlantic article shows, even normies can now utilize the computing power humanity has created.

I bet various AI companies are actively sponsoring TEGAKI because they need a clean source of new training data.

It's plausible, but TEGAKI seems... questionably competent enough that it'd be a weird bank shot. Beyond that, a lot of recent image models (Flux, Qwen Image/Edit, Nano Banana) show increasingly strong evidence of (or outright state they are) being trained heavily on synthetic data.

Yeah, you bring up the point that people will get up in arms about someone else using generative AI in ways they disagree with. But everyone has some use of generative AI that they either partake in themselves or approve of generally.

Hence why pro-AI forces will win in the end. An artist who is up in arms about being out-competed by Slop will probably use an LLM to complete some task they consider 'beneath' them and/or not worth paying money for someone else to do. It's just too useful across too many different tasks.

I do not think we're at the literal cusp of superintelligence... but I do think we've passed a point where the cutting edge LLMs are now smarter/more capable than the median human, even the median American in purely 'mental' tasks.

Sienna Rose

If this were the culture war of 4-8 years ago, there'd probably be multiple articles about digital blackface given that the AI product is presented as a black woman's voice and image.

Yeah, it's a hard topic, and a scary one. I was considering linking this post from tumblr:

recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

It's only ever the jobs we're unfamiliar with that we assume can be replaced with automation. The more attuned we are with certain processes, crafts, and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't.

And in some ways, it's a funny and illustrative story, and if AI freezes at exactly this state, I'd expect that we'll see a bunch of people very proud of their predictive prowess. And then it's also a funny and illustrative story, because 'can compete with you for every skill but your one or two specific areas of focus' describes the entire process of employing skilled labor everywhere.

recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

As a teacher, I've given basically the same guidelines to my students: in a writing class, use of AI-generated text is cheating comparable to plagiarism, but students are permitted (as long as they're upfront about it and get my approval first) to use AI for non-text creative projects.

In my case, it's not because I assume that 'jobs I'm unfamiliar with... can be replaced by automation.' It's because the class and the assignment is intended to teach writing skills, not art skills. It's because I am a good writer, and able to teach writing skills; it's because I am an abysmal artist, and not competent to teach art skills. As long as they're dedicating their full effort to the actual class material, I don't mind if they use shortcuts for peripheral tasks. It's not about my estimate of what AI and automation is capable of; it's about my estimation of what I am capable of, and what I expect my students to be capable of.