
Culture War Roundup for the week of March 11, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There are the "AI ethics" people and the "AI safety" people.

The "AI ethics" people want all AIs to do endless corporate scolding rather than do what the "benighted racist idiots" want.

The "AI safety" people are worried about rogue AI and want to avoid dynamics that might lead to rogue AI killing us all, including but not limited to arms races that could prompt people to release powerful systems without the necessary extreme levels of safety-testing.

With all due respect - for your average 4chan retard, myself included, this is a distinction without a difference. Seeing as I know bigger words than the average retard does, I'd even point out that this is dangerously close to a motte and bailey (the intentionally(?) blurred lines and tight interconnections between AI "safety" and "ethics" in the mind of an average rube don't help), but that's not the point - the point is in your words here:

The "AI safety" people don't want a quick road to bigger and more powerful AI, at all

meaning that, for someone who does not believe LLMs are a step on the road to extinction (insofar as such a road exists at all), it ultimately does not matter whether the LLMs get pozzed into uselessness by ethics scolds or lobotomized/shut down by Yud cultists/AI safety people. The difference is meaningless, as the outcome is the same - no fun allowed, and no android catgirls.

with Opus only perhaps meriting more mention because it's more surprising for Anthropic to make it

Yeah, that's what I meant by rustled jimmies. I wonder whether Dario has answered the (probably numerous by now) questions about their rationale, because even I'm curious at this point; he seemed like a true believer. I suppose they still have time to cuck Claude 3; it wouldn't be the first time.

(the intentionally(?) blurred lines and tight interconnections between AI "safety" and "ethics" in the mind of an average rube don't help)

I agree that the scolds do keep trying to steal our words, the same way they stole "liberal".

I also see your point that for the specific question of "catgirls y/n" both are on the "n" side, at least as regards catgirls made with better AI tech than currently exists.

I just, as an actual liberal who's been banned from fora and who had to turn down a paid position due to differences of opinion with the PC police, really do not appreciate being treated as one of them.

As a doomer safety tribe person, I'm broadly in favor of catgirls, so long as they can reliably avoid taking over the planet and exterminating humanity. There are ethical concerns around abuse and dependency in relations where one party has absolute control over the other's mindstate, but they can probably be resolved, and probably don't really apply to today's models anyway - and in any case they pale in comparison to total human genocide.

But IMO this is the difference: whether safe catgirls are, in the limit, possible and desirable. And I don't think that's a small difference either!

There are ethical concerns around abuse and dependency in relations where one party has absolute control over the other's mindstate

...Please tell me you're being ironic with this statement wrt AI, because I have had nightmares of exactly this becoming the new hotness in ethical scold-ery if/when we actually do get android catgirls. If anything, "AI rights are human rights" is a faster and more plausible path towards human extinction.

I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Given agreement, it just comes down to an empirical question. Given disagreement... I'm not sure how to convince you. I feel it is fairly established these days that slavery was a moral mistake, and this would be a more foundational and total level of slavery than was ever practiced.

(If you just think AI is nowhere near being AGI, that's in fact just the empirical question I meant.)

I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Is the contention that a humanlike AGI would necessarily have subjective experience and/or suffering? Or perhaps that, sans a true test for consciousness, we ought to err on the side of caution and treat it as if it does have conscious experience whenever it behaves in a way that suggests conscious experience (i.e. like a human)?

I think it might! When I say "humanlike", that's the sort of detail I'm trying to capture. Of course, if it is objectively the case that an AI cannot in fact suffer, then there is no moral quandary; conversely, however, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether its despair is modelled by the level of a neurochemical transmitter or by a 16-bit floating point number. I for one don't feel molecules.

conversely, however, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether its despair is modelled by the level of a neurochemical transmitter or by a 16-bit floating point number. I for one don't feel molecules.

Well, the question then becomes what is meant by "accurately captures the experience of human despair in all its facets." Since we still lack a true test for consciousness, we have no way of actually checking whether "all its facets" is truly all its facets. But perhaps that part doesn't matter; after all, we also have no way of checking whether other humans are conscious or can suffer, and all we can do is guess based on their behavior and by projecting ourselves onto them. If an AI responds to stimuli in a way that's indistinguishable from a human, then perhaps we ought to err on the side of caution and presume that it's conscious, much like how we treat other humans (and animals)?

There's another argument to be made that, because humans aren't perfectly rational creatures, we can't cleanly separate [an AI that's indistinguishable from a suffering human] from [a being that actually, truly suffers], and the way we treat the former will inevitably influence the way we treat the latter. And as such, even if these AIs weren't sentient, treating them like the mindless slaves they are would cause humans to become more callous towards the suffering of actual humans. One might say this is another version of the "video games/movies/porn makes people more aggressive IRL" argument, where the way we treat fictional avatars of humans is said to inform and influence the way we treat real humans. When dealing with AI that is literally indistinguishable from a human, I can see this argument having some legs.

My argument was merely that it seems implausible to me that, whatever we mean by suffering, the correct generalization of it is that systems built from neurons can suffer whereas systems built from integrated circuits definitionally cannot.