This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

This is a great comment and I thank you for it.
Let's be specific about three things, however: 1. LLMs/AI as a broad field; 2. specific models; 3. the commercial marketing of those models.
LLMs /AI -- Go for it. As something close to a free speech absolutist, I want progress in all directions on this front at this level.
Specific models. Go for it, again. I don't believe there is such a thing as an inherently "evil" model besides some embarrassingly obvious ones (i.e. one trained on pictures of cheese pizza - that's an internet euphemism for the most very bad thing, btw). I have no inherent issue with even "produce marketing slop only!" models. This is the level at which I think your comment operates -- yes, generative AI that could make a Shawshank-level film would be excellent!
The commercial marketing. This is the level at which I am raging. Not because I don't want to see more AI slop. I can already avoid that; I just turn off my computer monitor and phone. I rage because you have OpenAI, which has tens of billions of dollars to burn, sprinting towards the lowest-common-denominator use for gen-AI, made even worse by the fact that it's attempting to replicate the attention-capture model of social media. They could be putting infinite Dostoevsky in your pocket, but they are actively choosing not to. That's the contemptible feature for me. As my previous comment stated, even Google is going "hey, maybe let's try to make dense textbooks more accessible?" You can draw a straight-line path from that to "I want to read Dostoevsky, but I find it hard; hey RussianNovelistGPT, can you explain Raskolnikov to me?"
But, again, the median appetite seems to be a re-hash of attention economy capture processes. Anthropic I am more optimistic about because they seem to be doubling down on using Claude to build agents and to make coding open to people who don't code. But I also worry that will turn into a bunch of MBA types re-building their own shitty versions of SalesForce and pitching it to their boss as "one man AI project to synergize all of the KPIs!"
This is some perfect-world thinking, but I want to see the $100 bn of AI spend go to a company that's trying to develop new materials to help humanity economically escape the gravity well (and, no, this is not Elon and xAI). Or some AI company that actually has a non-vaporware approach to analyzing the big diseases that are responsible for the most suffering and death on earth. I'll stop here before I actually veer into "why can't all the good things be!" territory. My point remains: we're selling out early on AI because the charlatans by the bay captured a bunch of money and are re-plowing it into their business models from the 2000s and 2010s. We could be sprinting towards so much more.