This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Award-Winning AIs
To be fair, "best isekai light novel" is somewhere between 'overly narrow superlative' and 'damning with faint praise', and it's not clear exactly how predominantly AI-generated the writing is or what procedure the human involved used. My own experience has suggested that extant LLMs don't scale well to full short stories without constant direction every 600-1k words, but that is still a lot faster than writing outright, and there are plausible meta-prompt approaches that people have used with some success for coherence, if not necessarily for quality.
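That chunk-by-chunk direction can be sketched as a simple loop: keep a rolling summary, feed it back along with the next outline beat each pass. This is a hypothetical shape only; `llm` is a stand-in callable, not any particular vendor's API, and the prompts are illustrative.

```python
def write_story(llm, outline, chunk_words=800, max_chunks=10):
    """Chunked long-form generation: re-feed a rolling summary plus the
    next outline beat each step so the model stays on track.  `llm` is
    any callable prompt -> text; this is a sketch, not a vendor API."""
    story, summary = [], ""
    for beat in outline[:max_chunks]:
        prompt = (f"Story so far (summary): {summary}\n"
                  f"Next beat: {beat}\n"
                  f"Write about {chunk_words} words continuing the story.")
        chunk = llm(prompt)
        story.append(chunk)
        # Compress what exists so far so the next prompt stays short.
        summary = llm(f"Summarize briefly: {summary} {chunk}")
    return "\n\n".join(story)

# With a trivial stand-in "model" it just echoes the beats in order:
echo = lambda p: (p.splitlines()[1].removeprefix("Next beat: ")
                  if "Next beat:" in p else "recap")
draft = write_story(echo, ["hero wakes up", "portal opens"])
assert "hero wakes up" in draft and "portal opens" in draft
```

The rolling summary is the cheap version of the coherence meta-prompts mentioned above; fancier variants keep character sheets or chapter outlines in the same slot.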
Well, that's just the slop-optimizing machine winning in a slop competition.
It's a slightly higher standard than isekai (or country music), and Spotify is a much broader survey mechanism than Random Anime House, and a little easier for native English speakers to check. My tastes in music are... unusual, but the aigen seems... fine? Not amazing, by any means, and with some artifacts, but neither does it seem certain that the billboard number is just bot activity.
Well, that's not the professional use!
It's... hard to tell how much of this is an embarrassing truth specific to Studio Larian, or if it's just the first time someone said it out loud (and Larian did later claim to roll back some of it). Clair Obscur had a prestigious award revoked after the game turned out to have a handful of AI-generated temporary assets left in a pre-release-patch build. ARC Raiders uses a text-to-speech voice-cloning tool for adaptive voice lines. But a studio known for its rich atmospheric character and setting art doing a thing is still a data point.
(And pointedly anti-AI artists have had to struggle with it, saying they'd draw the line here or there. We'll see if that lasts.)
And that seems like just the start?
It's easy to train a LoRA to insert your character or characters into parts of a scene, to draw a layout and consider how light would work, or to munge composition until it points characters the right way. StableDiffusion's initial release came with a bunch of oft-ignored helpers for classically extremely tedious problems like making a texture support seamless tiling. Diffusion-based upscaling would be hard to detect even with access to raw ingest files. And, of course, DLSS is increasingly standard for AAA and even A-sized games, and it's gotten good enough that people are complaining that it's good. At the more experimental end, tools like TRELLIS and Hunyuan3D are now able to turn an image (or, more reasonably, a set of images) into a 3d model, and there's a small industry of specialized auto-rigging tools that could in theory take a set of images all the way to a fully-featured video game character.
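The seamless-tiling helper boils down to one idea: use wrap-around (circular) padding wherever the model pads, so the texture's left edge 'sees' its right edge. A toy numpy sketch of the principle, not StableDiffusion's actual implementation:

```python
import numpy as np

def conv2d_tiling(img, kernel):
    """2D filtering with circular ('wrap') padding, so the output of a
    seamlessly tiling input still tiles seamlessly."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="wrap")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
tile = rng.random((8, 8))
blur = np.ones((3, 3)) / 9.0
out = conv2d_tiling(tile, blur)

# Filtering the tile laid out 2x2 gives the same result as tiling the
# filtered tile: the hallmark of a seamless texture operation.
assert np.allclose(np.tile(out, (2, 2)), conv2d_tiling(np.tile(tile, (2, 2)), blur))
```

In the real models the same swap (zero padding to circular padding on the convolution layers) is what makes generated textures wrap without visible seams.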
I don't know Blender enough to judge the outputs (except to say TRELLIS tends to give really holey models). A domain expert like @FCfromSSC might be able to give more light on this topic than I can.
Well, that's not the expert use!
That's a pretty standard git commit message, these days, excepting the bit where anyone actually uses and potentially even pays for Antigravity. What's noteworthy is the user tag:
Assuming Torvalds hasn't been paid to advertise, that's a bit of a feather in the cap for AI codegen. The man is notoriously picky about code quality, even for small personal projects, and from a quick read-through (as an admitted python-anti-fan) that quality seems present here. That's still a long way from being useful in a 'real' codebase, from augmenting his skills in an area he knows well, or from duplicating his skills without his presence, but if you asked me whether I'd prefer to be recognized by a Japanese light novel award, Spotify's Top 50, or Linus Torvalds, I know which one I'd take.
My guesses for how quickly this stuff will progress haven't done great, but has anyone got an over/under on how long until a predominantly-AI, human-review-only commit makes it into the Linux kernel?
Well, that's just trivial stuff!
I don't understand these questions. I don't understand the extent to which I don't understand these questions. I'm guessing that some of the publicity is overstated, but I may not be able to evaluate even that. By their own assessment, the advocates of AI-solving Erdős problems admit:
So it may not even matter. There are a number of red circles, representing failures, and even some green circles of 'success' come with the caveat that the problem was already-solved or even already-solved in a suspiciously similar manner.
Still a lot better at it than I am.
Okay, that's the culture. Where's the war?
TEGAKI is a small Japanese art upload site, recently opened to (and then immediately overwhelmed by) widespread applause. Its main offerings are pretty clear:
That's a reasonable and useful service, and if they can manage to pull it off at scale - admittedly a difficult task they don't seem to be solving very well given the current 'maintenance' has a completion estimate of gfl - I could see it taking off. If it doesn't, it still describes probably the only plausible (if still imperfect) approach to distinguishing AI and human artwork, as AI models are increasingly breaking through the limits that gave them their obvious 'tells', and workflows like ControlNet or long inpainting work have made once-unimaginably-complex compositions readily available.
That's not the punchline. This is the punchline:
@Porean asked "To which tribe shall the gift of AI fall?" and that was an interesting question a whole (/checks notes/) three years ago. Today, the answer is a bit of a 'mu': the different tribes might rally around flags of "AI" and "anti-AI", but that's not actually going to tell you whether they're using it, nevermind if those uses are beneficial.
In September 2014, XKCD proposed that an algorithm to identify whether a picture contains a bird would take a team of researchers five years. YOLO made that available on a single desktop by 2018, in the sense that I could and did implement training from scratch, personally. A decade after XKCD 1425, you can buy equipment running (heavily stripped-down) equivalents or alternative approaches off the shelf, default-on; your cell phone probably does it on someone's server unless you turn cloud functionality off, and might even then. People who loathe image diffusers love auto-caption assistance that's based around CLIP. Google's default search tool puts an LLM output at the top, and while it was rightfully derided for nearly a year as terrible llama-level output, it's actually gotten good enough in recent months that I've started to see anti-AI people use it.
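For a sense of how small the once-five-year problem's plumbing has become: every YOLO-style detector ends with the same post-processing, intersection-over-union plus greedy non-max suppression, which fits in a few lines of plain Python. Illustrative only; real detectors run this over thousands of network-produced boxes.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box that overlaps a kept one too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate "bird" detections and one distant box:
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (200, 200, 240, 240)]
scores = [0.9, 0.8, 0.7]
assert nms(boxes, scores) == [0, 2]
```

The hard part, of course, was never this step; it was the network that emits the boxes, which is exactly the part that went from research team to commodity.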
This post used AI translation, because that's default-on for Twitter. I haven't thrown it to ChatGPT or Grok to check whether it's readable or has a coherent theme. Dunno whether doing so would match my intended theme better, or worse.
Don't read too much into Torvalds' endorsement. The vibe coded python visualizer he talks about is a small helper tool, not the actual project. It's pretty much equivalent to using vibe coding to write scripts and such (where I don't think anyone has disputed that LLMs can be useful).
As a domain expert in that subfield I find it amusing that for all his "programming guruness", the actual meat and potatoes of that project is a combination of what you'd have learnt on a masters-level DSP course in the late 70s / early 80s (including the same mistakes students typically make) with an imitation of the state of the art of 50 years ago (https://en.wikipedia.org/wiki/Eventide,_Inc#H910_Harmonizer) on much better hardware. Or to put it another way, when Torvalds is taken out of his comfort zone and field of expertise, he's roughly the equivalent of a third year university student. This is why I think anyone claiming universities are useless is full of shit. "Thinking really hard" (to paraphrase Yud) won't give you the necessary theoretical underpinnings required to even realize what you're missing, never mind do something useful in a whole lot of fields.
Incidentally if someone wants to make an actually decently performing pitch shifter, you could do worse than start with this paper which has a rather good explanation of the basic method as well as some significant improvements to quality compared to most earlier publications that are easy to find and read (it'll still sound warbly on polyphonic input but good sounding realtime polyphonic pitch shifting has to this date been accomplished by only three manufacturers that I know of).
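For contrast with the paper's method, the naive baseline is pitch shifting by plain resampling: read the buffer faster and the pitch rises, but the duration shrinks, which is exactly why practical pitch shifters bolt on time-stretching (a phase vocoder or similar) and where the warble creeps in. A toy sketch, not the paper's algorithm:

```python
import math

def resample_pitch_shift(samples, semitones):
    """Crude pitch shift by linear-interpolation resampling.  Raising
    pitch this way also shortens the audio, which is why real pitch
    shifters combine resampling with time-stretching."""
    ratio = 2 ** (semitones / 12)       # playback-rate change per semitone
    n_out = int(len(samples) / ratio)
    out = []
    for i in range(n_out):
        pos = i * ratio
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(samples) - 1)
        out.append((1 - frac) * samples[lo] + frac * samples[hi])
    return out

# A one-second 440 Hz sine shifted up an octave: double the pitch,
# half the duration.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
shifted = resample_pitch_shift(tone, 12)
assert len(shifted) == sr // 2
```

Everything beyond this baseline (keeping the duration fixed without smearing transients or warbling on polyphonic input) is where the actual DSP theory lives.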
That's fair, although I'll caveat that, assuming university degrees are a good designation of skill and that a skilled user with AI ends up "roughly the equivalent of a third year university student", that's still praising with faint damns.
That's a very interesting paper to read, I'll admit, though.
The joke is of course that DSP is typically introduced in the third year (or at least was around here). I.e. Linus (widely regarded as a brilliant programmer, with a masters degree in CS) managed roughly as well as a student after an introductory DSP course. I don't think he actually used AI for the C code, or at least it wouldn't make any sense given the dearth of good training material (*), and the programming part itself is incredibly easy for any competent C or C++ programmer who doesn't need particularly optimal code (as Linus outright mentions in the repo). The trick is knowing which DSP algorithms to use, and how and why the textbook ones are flawed (or outright bad in many cases).
*: There's a site called musicdsp.org, a somewhat prominent collection of pseudo- and source-code snippets of all sorts of audio DSP algorithms. They also range from subtly flawed to horribly bad almost without exception, and a layman (i.e. someone who hasn't studied DSP theory) will have a hard time understanding how and where. Thus it's, quite ironically, almost exactly what you'd get if someone time-traveled to the early 2000s and established a site specifically dedicated to poisoning future AI training on that subtopic (with a fair bit of success, I'd say, given that the alternatives are either bits here and there or actual books / papers with math instead of code that can be copy-pasted). Way back in the day I went from "Hmm, this looks kinda nifty" to "OMG, everyone there is a goddamn moron" in the course of just one year when I started DSP studies.
A university degree by itself is no guarantee (as I found out to my pain during a couple of group-work sessions where I had to do everything myself because everybody else was so incompetent), but there are many hard-science / engineering fields where more or less everyone competent has a degree in a closely related subject, has a maths / physics degree with significant self-study, or is a polymath with near-genius-level intellect. So in practise a university degree is a requirement to be any good at them. Programming just happens to be a very notorious exception (case in point: I only took a handful of programming classes in university and have been making my living for the last 25 years mostly as a C++ programmer in either DSP or embedded systems).