Culture War Roundup for the week of February 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Very much is an assumption and not "how the world works". It's an article of faith masquerading as a scientific explanation of how things ought to be.

What do you think instrumental convergence is?

I am becoming suspicious that you are spouting dismissive words without those words actually referencing any ideas.

For me, the only conclusions about how things ought to work come from the field of physics, not "AI ethics".

Does physics not suggest that controllable energy sources are a necessary step in doing lots of different things?

No it didn't because I don't make generative AI predictions or think about them at all.

That much is clear.

What do you think instrumental convergence is?

I am becoming suspicious that you are spouting dismissive words without those words actually referencing any ideas.

I think it is what Wikipedia says it is. That part is clear enough to me I don't think it needs repetition.

Once again, I don't buy into the hypothesis, for the plain reason that it's a thought experiment stacked on a thought experiment stacked on a thought experiment. There is no empirical basis for me to believe that a paperclip maximizer will actually behave as Yudkowsky et al claim it would. There is no mechanical basis either, since no such AI exists. I don't think current reinforcement learning models are a good proxy for the model Yudkowsky talks about. It's speculation at its very core.

Why, you may ask. Simple answer: the same reason I don't claim to know what the climate will be like 1000 years from now. There is a trend. We can extrapolate that trend. The error bars will be ungodly massive. Chaining thought experiment on thought experiment creates the same problem: too much uncertainty. At some point it's just the assumptions doing all the talking. And I won't be losing sleep over a hypothetical.
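The compounding-uncertainty point can be made quantitative. Here is a minimal sketch (the step count and per-step probabilities are invented for illustration, and the steps are treated as independent): if an argument chains several speculative claims, each of which holds with probability p, the chance the whole chain holds shrinks multiplicatively.

```python
def chain_confidence(step_probabilities):
    """Probability that every step in a chain of independent claims holds."""
    result = 1.0
    for p in step_probabilities:
        result *= p  # each additional speculative step multiplies in its own uncertainty
    return result

# Five individually plausible steps (80% each) leave the overall
# conclusion at roughly 33% -- the error bars grow fast.
steps = [0.8] * 5
print(round(chain_confidence(steps), 2))  # 0.33
```

This is the same arithmetic behind long-range extrapolation: each added link in the chain widens the interval, until the assumptions dominate the conclusion.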

That much is clear.

Spare me the snark/attitude if you don't want to be left on read.

You claimed before:

Everything that the doomers claim AI would do assumes a biological utility function, such as maximizing growth, reproduction, and fitness.

[Instrumental convergence] Very much is an assumption

Now you claim:

I think it is what Wikipedia says it is

But Wikipedia says it is a conclusion derived from several assumptions, which Wikipedia lists. None of them has anything to do with a biological utility function. So it's pretty clear that either a) you don't think it is what Wikipedia says it is, or b) you didn't read Wikipedia or anything else about it. But for some reason you still feel the need to make dismissive statements.

Feel free to leave me "on read". It is clear you are not here to discuss this in good faith. You might want to check out a subreddit called /r/sneerclub - it may be to your liking.

Feel free to leave me "on read". It is clear you are not here to discuss this in good faith. You might want to check out a subreddit called /r/sneerclub - it may be to your liking.

If you can't engage civilly, do not engage.

If you read Wikipedia, you would see that "instrumental convergence" is not the same thing as the "biological utility function" I described. Quite rich of you to claim I didn't read up and am spouting words without knowing the ideas. Your confusion in treating them as the same thing, or even in the same ballpark, is honestly hilarious.

FYI - Instrumental convergence describes convergence toward a biological utility function (different primary goals converge to the same sub-goals, often biological ones for "intelligent" agents). I was talking about the utility function, not the convergence itself.

Once again, I don't know why you are acting like AI doomerism is the word of God. I can read, comprehend, and understand every single thing on the Wikipedia page and still not agree with it. I gave you my reason [compounding uncertainty built on a house of assumptions] clear as day.

It is clear you are not here to discuss this in good faith. You might want to check out a subreddit called /r/sneerclub - it may be to your liking.

I have 600+ comments on the motte. You are the one who made it personal and is giving more snark than substance. Piss off.

I have 600+ comments on the motte. You are the one who made it personal and is giving more snark than substance. Piss off.

If you can't engage civilly, do not engage.