This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Notes -
Shakespeare and his contemporaries had to pay a significantly higher upfront cost. They had to write a manuscript (writing itself was a rare skill) and had to convince at least one theatre manager to read their work. This means their innate skill had to be high enough that their first (or second, or third, if they were persistent) play was already of sufficient quality.
A modern ShAIkespeare can produce and publish a new play every weekend. We need Lord StrAInge's Men, a troupe of AIs that can read, review and dismiss AI slop just as quickly as it's written instead of relying on avid human readers.
An AI that can accurately identify and dismiss slop is 90% of the way towards producing quality content, since you could just build the generative AI with that skill built in (and train them on it).
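The "a reliable critic is most of a generator" point can be made concrete with a toy rejection-sampling loop: wrap any generator in a filter that only keeps drafts the critic approves. Everything here is a hypothetical stand-in, not a real model; the critic simply sees each draft's quality directly.

```python
import random

random.seed(0)

def generate_draft() -> float:
    """Stand-in for an AI generator; returns a draft's latent quality in [0, 1)."""
    return random.random()

def critic_score(draft: float) -> float:
    """Stand-in for the slop-detecting critic; here it reads quality directly."""
    return draft

def filtered_generation(threshold: float = 0.8, tries: int = 100) -> list[float]:
    """Keep only the drafts the critic rates at or above the threshold."""
    return [d for d in (generate_draft() for _ in range(tries))
            if critic_score(d) >= threshold]

kept = filtered_generation()
# Every surviving draft clears the critic's bar.
assert all(critic_score(d) >= 0.8 for d in kept)
```

The sketch shows why the critic does most of the work: the generator can stay mediocre as long as the filter is trustworthy, and a good critic can also be used as a training signal for the generator itself.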
Which is to say, maybe in 10 years this will be a mostly non-issue. If they reach the point where they can generate thousands of genuinely high-quality, entertaining stories, I'll happily consume the content. I think "human authorship" as a background principle is overrated: it has some value, but far less than the inherent value of the work itself. The problem with slop is that it's not very good, regardless of whether it's generated by humans or AI. Once it's good, then we're good.
Not if the process itself is beyond the AI to recreate.
For instance, say that a great movie like A Clockwork Orange was made in part through the theoretical understandings the main actors had developed over their lifetimes for their crafts and used to feed into their decisions of how to act and portray their characters.
Coming up with a similar quality of acting might be impossible through mere observation and mimicry of what works and what doesn't. The AI has an intuition for what sorts of things generally go together, but it doesn't use, among other things, underlying theoretical know-how to construct its outputs.
My current assessment is that there's a low ceiling on how far AI 'thinking' can take the quality of its output, particularly regarding the complexity of what it's attempting. Projects that require a layered approach of various theories and techniques seem fundamentally beyond AI. The more systems that need to come together to create a work, the harder, exponentially, it becomes for a pattern-finder to match their quality. Past a certain point, the pattern-finder needs to become capable of wielding tools, systems, and theories in its thinking in order to up its game.
I've heard people say before, in the context of AI art, that humans are essentially just 'pattern finders', too, and so are creatively indistinguishable from AI. But I think this is wrong: it ignores external tools humans use to structure and create their work, such as theories and techniques, which cumulatively take the load off of them having to conceive everything in a fit of genius. I think this is the primary reason AI, despite its 'brilliance' as a search engine or generalist explainer, is so lacking in certain other regards. It's due to the total reliance of its 'cognition' on what, compared to humans, would be more like a single sub-process.
Why would you think this? Every year it gets better at this sort of thing. Clearly it is beyond the level of current AI, but I don't see how you make the leap to "fundamentally beyond" when this seems like exactly the sort of thing you could do by explicitly layering various theories and techniques together. Maybe you have 20 different sub-AIs, each of which is an expert in one theory or technique, and then you amalgamate them into one mega-AI that can use all of those techniques (with some central core that synthesizes all of the ideas together). I don't know that that's definitely possible, but I can't see any evidence that it's "fundamentally" beyond AI just because they can't do it now. A couple of years ago AI couldn't figure out prepositions, like putting a cat on top of a horse vs. putting a tattoo of a cat on a horse, and people said that was "fundamentally beyond AI" because it had never encountered the real world and didn't understand how things interact. But now they can usually do that, because they got better.
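The "sub-AIs plus a synthesizing core" proposal can be sketched in miniature: each expert scores a candidate text from its own theory, and a central function amalgamates their verdicts. The expert names and scoring rules below are purely illustrative placeholders, not real models.

```python
from statistics import mean

# Hypothetical experts, one per theory/technique; each maps text -> score in [0, 1].
EXPERTS = {
    "dramatic_structure": lambda text: min(1.0, len(text.split(".")) / 5),
    "vocabulary_range":   lambda text: min(1.0, len(set(text.split())) / 20),
    "rhythm":             lambda text: min(1.0, len(text) / 200),
}

def synthesize(text: str) -> float:
    """Central core: amalgamate every expert's verdict into one overall score."""
    return mean(expert(text) for expert in EXPERTS.values())
```

In a real system each lambda would be its own trained model and the synthesizer would be learned rather than a flat average, but the architecture — independent theory-driven judges feeding a combiner — is the same shape as the proposal above.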
Yeah, once it's good then it's good. The problem with AI content is that it can be produced by people that have no taste. Kinkade's success was one of a kind, but now anyone can create an equally terrible picture that has all the signs of expert craftsmanship. And since there are lots of people that have no taste (or else Kinkade would've died a much poorer man), they can all make and consume terrible art.
I liked Scott's "AI art vs human art" contest (and another iteration in 2026 under the same rules would be even harder), but it was rigged: we didn't get random AI art vs random human art (or even art randomly sampled from high-rated examples); we got a set of pictures explicitly chosen to resemble human-made art. It's like when people failed the Turing test against a chatbot that pretended to be a non-native teenager (this happened before LLMs): the fact that some bots can fool you in specific circumstances doesn't mean you can't complain about the rest of them being chipper hallucinators.