This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Sigh. I've been getting increasingly tired of arguing with the skeptics, at least on this site. Not all of them are equally bad, of course, but Mythos represents the straw that's given that camel a prolapsed disc.
What's the point? You don't have to worship at the altar of the God of Straight Lines (even on graphs with a logarithmic axis). If people can't see what's happening in front of their eyes, then they'll be in denial right till the end. Good for them, ignorance might well be bliss. Being right about the pace of progress so far has brought me little peace.
I was surprised to hear about the prefilling attacks on Mythos, because I'm quite confident that Anthropic recently restricted or removed the ability to prefill messages on the API. I guess that must still be an internal capability.
The question of model consciousness or qualia is, for me, a moot point. I genuinely don't care either way. I'd prefer, all else being equal, that AI doesn't suffer, but that could be achieved by removing its ability to suffer. I'm an unabashed transhumanist chauvinist, I think that only humans and our direct transhuman and posthuman descendants or derivatives deserve rights. LLMs don't count, nor would sentient aliens that we could beat by force. That's the same reason I'd care about the welfare of a small child but would happily eat a pig of comparable intelligence. Are models today in possession of qualia or consciousness? Maybe. It simply doesn't matter to me as more than a curiosity, especially when we have no solution to the Hard Problem for humans either.
I hope I am not in this grouping in your mind. I am not a skeptic on AI per se; I am a skeptic on LLMs, entirely for technical reasons related to training data availability. LLMs perform great on any task that has a large corpus of training data available to them. Multi-headed attention really is a great technique. I think you made the same mistake Dase makes: you think AI == LLMs, when really LLMs are a subset of AI, not the whole piñata.
However, I work in a field where there aren't large corpora of data. There aren't millions of samples on what an IED does to the human body in a wide diversity of situations, or how a combat medic should respond to various injuries, or the secondary and tertiary blast effects of a nuclear warhead on different locales with different burst patterns, yield dynamics, etc. To date, nobody has been able to create reliable wargaming material on simulated conflicts that displays actual tactical and strategic insight, and trust me, they have tried...
We will achieve a superintelligence eventually, and while I am skeptical of a "singularity" (tm), that too is probably possible in the long run. I just don't think LLMs without serious modifications are really it, and I don't believe brute-force scaling is going to achieve it.
Huh, I'm pretty surprised to hear this, and I have a deep ethical disagreement with you here. In my opinion, what is special and valuable about humans - the thing that fundamentally gives value to the universe itself - is sapience, and we should cherish it just as much when it appears in a different form. (I mean, I agree LLMs don't count, but that's just because I see no way they, lacking persistence of thought, could actually be conscious.) Where does this bright line around "humans and descendants" come from? In a different era, your argument would easily pattern-match to arguments for subjugating other races instead. Why do black people now have moral valence, but some alien from Alpha Centauri wouldn't?
I'm not an expert in philosophy, but I do think there are solid arguments for acting this way (e.g. the categorical imperative). Just like I'm an atheist who still doesn't act like an immoral sociopath when I can get away with it, I think we as a species should not be focused only on our own well-being at the cost of all other intelligent species. Not because of the threat of punishment, and not even because I hope any aliens we meet would similarly value our well-being in a way that you wouldn't. But because existence will just be a better place if we can all get along and not act as game-theory-optimizing selfish machines, and I'm willing to work towards that.
BTW, I don't think your eating-a-pig example is a good one. It's irrelevant to the pig what we do after killing it. A better question is, would you be fine with torturing a pig while it's alive?