This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Award-Winning AIs
To be fair, "best isekai light novel" is somewhere between 'overly narrow superlative' and 'damning with faint praise', and it's not clear exactly how predominantly AI-generated the writing is or what procedure the human involved used. My own experience has suggested that extant LLMs don't scale well to full short stories without constant direction every 600-1k words, but that is still a lot faster than writing outright, and there are plausible meta-prompt approaches that people have used with some success for coherence, if not necessarily for quality.
Well, that's just the slop-optimizing machine winning in a slop competition.
It's a slightly higher standard than isekai (or country music), and Spotify is a much broader survey mechanism than Random Anime House, and a little easier to check for native English speakers. My tastes in music are... ~~bad~~ unusual, but the aigen seems... fine? Not amazing, by any means, and there are some artifacts, but neither does it seem certain that the billboard number is just bot activity.

Well, that's not the professional use!
It's... hard to tell how much of this is an embarrassing truth specific to Larian, or if it's just the first time someone said it out loud (and Larian did later claim to roll back some of it). Clair Obscur had a prestigious award revoked after the game turned out to have a handful of aigen temporary assets left in a pre-release-patch build. ARC Raiders uses a text-to-speech voice cloning tool for adaptive voice lines. But a studio known for its rich atmospheric character and setting art doing a thing is still a data point.
(And pointedly anti-AI artists have gotten to struggle with it, saying they'd draw the line here or there. We'll see if that lasts.)
And that seems like just the start?
It's easy to train a LoRA to insert your character or characters into parts of a scene, to draw a layout and consider how light would work, or to munge composition until it points characters the right way. StableDiffusion's initial release came with a bunch of oft-ignored helpers for classically extremely tedious problems like making a texture support seamless tiling. Diffusion-based upscaling would be hard to detect even with access to raw ingest files. And, of course, DLSS is increasingly standard for AAA and even A-sized games, and it's gotten good enough that people are complaining that it's good. On the more experimental side, tools like TRELLIS and Hunyuan3D can now turn an image (or, more reasonably, a set of images) into a 3d model, and there's a small industry of specialized auto-rigging tools that could theoretically take a set of images all the way to a fully-featured video game character.
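(For the curious: the seamless-tiling trick is, at its core, just circular padding - if every filter wraps around the image edges instead of stopping at them, the output tiles without seams. A toy numpy sketch of the idea, not Stable Diffusion's actual code:

```python
import numpy as np

def conv2d_circular(img, kernel):
    """2D convolution with wrap-around (circular) padding, so the
    left/right and top/bottom edges stay continuous with each other."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # mode="wrap" pads each edge with pixels from the opposite edge
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="wrap")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A random "texture" filtered this way tiles seamlessly: the filter
# already "saw" the wrapped-around pixels at every edge, so differences
# across tile boundaries look like differences in the interior.
rng = np.random.default_rng(0)
tex = conv2d_circular(rng.random((16, 16)), np.ones((3, 3)) / 9)
tiled = np.tile(tex, (2, 2))
```

In the real diffusion pipelines the same swap - replacing zero padding with circular padding in the model's convolution layers - is all the "helper" amounts to.)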
I don't know Blender well enough to judge the outputs (except to say TRELLIS tends to give really holey models). A domain expert like @FCfromSSC might be able to shed more light on this topic than I can.
Well, that's not the expert use!
That's a pretty standard git commit message, these days, excepting the bit where anyone actually uses and potentially even pays for Antigravity. What's noteworthy is the user tag:
Assuming Torvalds hasn't been paid to advertise, that's a bit of a feather in the cap for AI codegen. The man is notoriously picky about code quality, even for small personal projects, and from a quick read-through (as an admitted python-anti-fan) that quality seems present here. That's a long way from being useful in a 'real' codebase, from augmenting his skills in an area he knows well, or from duplicating his skills without his presence, but if you asked me whether I'd prefer to be recognized by a Japanese light novel award, Spotify's Top 50, or Linus Torvalds, I know which one I'd take.
My guesses for how quickly this stuff will progress haven't done great, but anyone got an over/under until a predominantly-AI, human-review-only commit makes it into the Linux kernel?
Well, that's just trivial stuff!
I don't understand these questions. I don't understand the extent to which I don't understand these questions. I'm guessing that some of the publicity is overstated, but I may not be able to evaluate even that. By their own assessment, the advocates of AI-solving Erdős problems admit:
So it may not even matter. There are a number of red circles, representing failures, and even some green circles of 'success' come with the caveat that the problem was already-solved or even already-solved in a suspiciously similar manner.
Still a lot ~~smarter about~~ better at it than I am.

Okay, that's the culture. Where's the war?
TEGAKI is a small Japanese art upload site, recently opened to (and then immediately overwhelmed by) widespread applause. Its main offerings are pretty clear:
That's a reasonable and useful service, and if they can manage to pull it off at scale - admittedly a difficult task they don't seem to be solving very well, given the current 'maintenance' has a completion estimate of gfl - I could see it taking off. Even if it doesn't, it describes probably the only plausible (if still imperfect) approach to distinguishing AI and human artwork, as AI models are increasingly breaking through the limits that gave them their obvious 'tells', and workflows like ControlNet or long inpainting work have made once-unimaginably-complex compositions readily available.
That's not the punchline. This is the punchline:
@Porean asked "To which tribe shall the gift of AI fall?" and that was an interesting question a whole (/checks notes/) three years ago. Today, the answer is a bit of a 'mu': the different tribes might rally around flags of "AI" and "anti-AI", but that's not actually going to tell you whether they're using it, never mind whether those uses are beneficial.
In September 2014, XKCD proposed that an algorithm to identify whether a picture contains a bird would take a team of researchers five years. YOLO made that available on a single desktop by 2018, in the sense that I could and did personally implement training from scratch. A decade after XKCD 1425, you can buy equipment running (heavily stripped-down) equivalents or alternative approaches off the shelf, default-on; your cell phone probably does it on someone's server unless you turn cloud functionality off, and might even then. People who loathe image diffusers love auto-caption assistance that's based around CLIP. Google's default search tool puts an LLM output at the top, and while it was rightfully derided for nearly a year as terrible llama-level output, it's actually gotten good enough in recent months that I've started to see anti-AI people use it.
This post used AI translation, because that's default-on for Twitter. I haven't thrown it to ChatGPT or Grok to check whether it's readable or has a coherent theme. Dunno whether doing so would match my intended theme better, or worse.
I wanted to write a post about some of these events, specifically the change in attitude for titans of industry like Linus Torvalds and Terence Tao. I'm no programmer, but I like to peer over their shoulders, and I know enough to find it profoundly disorienting to see the creator of Linux, a man whose reputation for code quality involves tearing strips off people for minor whitespace violations, admit to vibe-coding with an LLM.
Torvalds and Tao are as close to gods as you can get in their respective fields. If they're deriving clear utility from using AI in their spheres, then anyone who claims that the tools are useless really ought to acknowledge the severe Skill Issue on display. It's one thing for a concept artist on Twitter to complain about the soul of art. It is quite another for a Fields Medalist to shrug and say, "Actually, this machine is helpful."
Fortunately, people who actually claim that LLMs are entirely useless are becoming rare these days. The goalposts have shifted with such velocity that they've undergone a redshift. We've moved rapidly from "it can't do the thing" to "it does the thing, but it's derivative slop" to "it does the thing expertly, but it uses too much water." The detractors have been more than replaced by those who latch onto both actual issues (electricity use, at least until the grid expands) and utter non-issues to justify their aesthetic distaste.
But I'm tired, boss.
I'm sick of winning, or at least of being right. There's little satisfaction to be had in predicting the sharks in the water when I'm treading that same water with the rest of you. I look at the examples in the OP, like the cancelled light novel or the fake pop star, and I don't see a resistance holding the line. I see a series of rearguard actions. Not even particularly dignified ones.
Ah, the irony of me being about to misattribute this quote to Gandhi, only to be corrected by the dumb bot Google uses for search results. And AI supposedly spreads misinformation. It turns out that the "stochastic parrot" is sometimes better at fact-checking than the human memory.
Unfortunately, having a lower Brier score, while good for the ego, doesn't significantly ameliorate my anxiety regarding my own job, career, and general future. Predicting the avalanche doesn't stop the snow. And who knows, maybe things will plateau at a level that is somehow not catastrophic for human employability or control over the future. We might well be approaching the former today, and certain fields are fucked already. Just ask the translators, or the concept artists at Larian who are now "polishing" placeholder assets that never quite get replaced (and some of the bigger companies, like Activision, use AI wherever they can get away with it, and don't seem to particularly give a fuck when caught out). Unfortunately, wishing my detractors were correct isn't the same as making them correct. Their track record is worse than mine.
The TEGAKI example is... chef's kiss. Behold! I present a site dedicated to "Hand-drawn only," a digital fortress for the human spirit, explicitly banning generative AI. And how is this fortress built? With Cursor, Claude, and CodeRabbit.
(Everyone wants to automate every job that's not their own, and perhaps even that if nobody else notices. Guess what, chucklefuck? Everyone else feels the same, and that includes your boss.)
To the question "To which tribe shall the gift of AI fall?", the answer is "Mu." The tribes may rally around flags of "AI" and "Anti-AI," but that doesn't actually tell you whether they're using it. It only tells you whether they admit it. We're in a situation where the anti-AI platform is built by AI, presumably because the human developers wanted to save time so they could build their anti-AI platform faster. This is the Moloch trap in a nutshell, clamped around your nuts. You can hate the tool, but if the tool lets your competitor (or your own development team) move twice as fast, you will use the tool.
We are currently in the frog-boiling phase of AI adoption. Even normies get use out of the tools, and if they happen to live under a rock, they have it shoved down their throats. It's on YouTube, it's consuming TikTok and Instagram, it's on the damn news every other day. It's in your homework, it's in the emails you receive, it's you double checking your prescription and asking ChatGPT to explain the funny magic words because your doctor (me, hypothetically) was too busy typing notes into an Epic system designed by sadists to explain the side effects of Sertraline in detail.
To the extent that it is helpful, and not misleading, to imagine the story of the world as having a genre: science fiction won. We spent decades arguing about whether strong AI was possible, whether computers could be creative, whether the Chinese Room argument held water. The universe looked at our philosophical debates and dropped a several-trillion-parameter model on our heads.
The only question left is the sub-genre.
Are we heading for the outcome where we become solar-punks with a Dyson swarm, leveraging our new alien intelligences to fix the climate and solve the Riemann Hypothesis? Or are we barrelling toward a cyberpunk dystopia with a Dyson swarm, where the rich have Omni-sapients in their pockets while the rest of us scrape by in the ruins of the creative economy, generating training data for a credit? Or perhaps we are the lucky denizens of a Fully Automated Luxury Space Commune with optional homosexuality (but mandatory Dyson swarms)?
(I've left out the very real possibility of human extinction. Don't worry, the swarm didn't go anywhere.)
The TEGAKI example suggests the middle path is most likely, at least for a few years (and the "middle" would have been ridiculous scifi a decade back). A world where we loudly proclaim our purity while quietly outsourcing the heavy lifting to the machine. We'll ban AI art while using AI to build the ban-hammer. We'll mock the "slop" while reading AI summaries of the news. We'll claim superiority over the machine right up until the moment it politely corrects our Gandhi quotes and writes the Linux kernel better than we can.
I used to think my willingness to embrace these tools gave me an edge, a way to stay ahead of the curve. Now I suspect it just means I'll be the first one to realize when the curve has become a vertical wall.
Yeah. There's a mirror to this post in one I wrote on tumblr, steelmanning the concerns with AI use for that audience, and it's... concerning how many are already applicable even with zero further progress in AI features and capabilities.
(And, uh, that I used AI-assisted doctors as an example, albeit of the form "Amazon-East goes down and suddenly your doctor doesn’t know how to check the side effects for a specific medicine.")
Very much agreed. It's probably useful to notice that the criticisms of AI output have moved from errors in basic functionality to a lack of vision or of domain-expert discernment, but that's not very comforting for someone without that vision themselves.
And then it becomes near-certain that progress isn't going to stop here.