
Culture War Roundup for the week of March 25, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Sigh, slow news week.


Dexter and why meta-contrarians suck.

Dexter was a show about a serial killer that aired on Showtime. It was pretty good, especially the early seasons. The premise, for those of you who don't know, is that Dexter was a "good" serial killer who only killed other killers.

If killers are bad, then Dexter was good because he reduced the number of killers.

You know who would really suck? A meta-Dexter who only killed Dexters.

... and that's how I see meta-contrarians.

"Let a thousand flowers bloom", the contrarians say, considering all sorts of weird and different ideas. "Actually, the rose is already the best flower and you smell bad" says the meta-contrarian, smugly.

Who are these meta-contrarians you ask? They are mustachioed hipsters of the rationalist community. They might dabble in some forbidden thoughts, but they don't take them seriously. Because, after all, the default hypothesis is usually the correct one.

And, yes, the default hypothesis usually is correct. But contrarians serve a valuable purpose, even if they are wrong more often than not! Because not EVERY default hypothesis is correct. And without contrarians we'll never find out which ones are wrong.

So I think it's important to give contrarians a lot MORE grace than people who espouse the default opinion. Meta-contrarians give them LESS grace. And that's why they suck.

Who are these meta-contrarians you ask? They are mustachioed hipsters of the rationalist community.

I've never heard of this before. Do you have any examples?

My biggest beef is with the people who want to police AI "doomerism".

Big Yud is probably wrong about AI. But I think his ideas are valuable, much more so than the army of normies who own $NVDA stock and think AI is neato keen, isn't science fucking awesome?

Contrarians are society's immune system and should be respected as such. Sometimes they attack healthy tissue, but we're so much better off with them than without.

people who want to police AI "doomerism"

Is that like the effective accelerationists? That would probably make a more interesting top level post. As far as I can tell, not following the issue closely, they are not “meta contrarians,” but simply a different group, with perceived interests opposed to Yud. Being aware of someone and not believing them doesn’t make a position meta. Are the people down thread making fun of the carpet moth effective altruist somehow meta for thinking she took things too far and made a fool of herself?

Here's my model.

  1. Normie opinion. AI is great because it will create jobs or something. Sam Altman actually tweeted this, which proves that he's just playing the game: "building massive-scale ai infrastructure, and a resilient supply chain, is crucial to economic competitiveness." So ask yourself: what would a 70-year-old senator think? This is the normie opinion.

  2. Contrarian. Actually, maybe we shouldn't build AGI if we want to survive as a species.

  3. Meta-contrarian: Lol, don't you know nothing ever happens. Contrarians are always wrong. Let's listen to the normies.

What bugs me about AI anti-doomers is that they don’t realize how much even a non-sentient, mid-level AI could wreck society. In two years, digital animators are all going to be obsolete. If someone brings to market an AI that can write emails and push paper reasonably well, there goes 60 percent of white collar jobs. Couple that with a halfway decent Tesla android, and there goes 40 percent of working class jobs. Making half of the people in the job market unemployed in the span of five years would cause major, major political, social, and economic problems. And I doubt our corporate overlords are going to respond to suddenly having 3 billion new useless eaters by going “UBI for everyone!” Hell, even a Skynet apocalypse scenario doesn’t require a God-like AGI; it just requires a reasonably smart, non-sentient system with basic self-preservation instincts and access to armaments. And that’s not even getting into the trouble that human actors could cause with good-but-not-great AI systems.

If someone brings to market an AI that can write emails and push paper reasonably well, there goes 60 percent of white collar jobs.

Didn't that happen? I feel like that sort of "office drone, paper-pusher" job has become rather rare, thanks to better IT and management in general. I vaguely remember a time in the 90s when you could still get a job just because you knew how to type and use MS Office software. Now you wouldn't even put that on your resume; it's just taken for granted that any college graduate can do that, and you need some other specialized skill to get in the door. (Or be friends with the hiring manager, or be a diversity hire, or something like that.)

I would classify myself as an AI anti-doomer. I think I recognize all the things you're pointing out, and maybe a few you haven't thought of. The question is, do the proponents of AI Doom offer a plausible path forward around these problems? It seems obvious to me that they do not, so what's the point of listening to them, rather than buying a few more poverty ponies and generally buckling up for the crash?

The thing that makes the path forward plausible is people acknowledging the problem and contributing to the solution, just like any other problem that requires group action.

I don't think you actually live your life this way. You're just choosing to do so in this case because it's more convenient / for the vibes.

Think of every disaster in history that was predicted. "We could prevent this disaster with group action, but I'm only an individual and not a group so I'm just going to relax." Is that really your outlook?

If there was an invading army coming in 5 years that could be beaten with group action or else we would all die, with nowhere to flee to, would you just relax for 5 years and then die? Even while watching others working on a defense? Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort? Is the word 'altruism' such a turn-off to you? How about "honor" or "pride" or "loyalty to one's people"? How about "cowardice" or "weakling"? Do these words shift anything for you, regarding the vibes?

Edit: I'm not trying to be insulting, just trying to call attention to the nature of how vibes work.

People do pro-social things not just because of the fear of punishment for not doing them, but because they understand that they are contributing to a commons that benefits everyone, including themselves.

For the record, it wouldn't be that hard to solve this problem, if people wanted to. Alignment is pretty hard, but just delaying the day we all die indefinitely with a monitoring regime wouldn't be that hard, and it would have other benefits, chiefly extending the period where you get to kick back and enjoy your life.

Question: Are there any problems in history that were solved by the actions of a group of people instead of one person acting unilaterally that you think were worth solving? What would you say to someone who took the same perspective that you are taking now regarding that problem?

And the "Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort?" question is worth an answer to, I feel.

The thing that makes the path forward plausible is people acknowledging the problem and contributing to the solution, just like any other problem that requires group action.

I don't think the AI doomers have a solution, and I don't think their actions are contributing to a solution. I've seen no evidence that they're making any meaningful progress toward Alignment, and I'm fairly skeptical that "alignment" is even a coherent concept. I'm quite confident that Coherent Extrapolated Volition is nonsense. I don't agree that their efforts are productive. I don't even agree that they're better than nothing.

Think of every disaster in history that was predicted. "We could prevent this disaster with group action, but I'm only an individual and not a group so I'm just going to relax." Is that really your outlook?

I note that you haven't actually named relevant disasters. How about the Population Bomb? How about the inevitable collapse of Capitalism, to which Communism was the only possible solution? The war on poverty, the war on alcohol, the war on terror, the war on kulaks and wreckers, the war on sparrows? The large majority of disasters predicted throughout history have been mirages, and many "solutions" have straightforwardly made things worse.

It is not enough to predict dire outcomes. The problem you're trying to solve needs to be real, and the solution you're implementing needs to have evidence that it actually works. The AI doomers don't have that, and worse, the methods they're looking for, stuff like CEV and "pivotal acts", are either fallacious or actively dangerous. The whole edifice is built on Utilitarianism run amok, on extrapolation and inference, and is specifically formulated to be as resistant to skepticism as possible.

In any case, I'm not thinking as an individual. I am explicitly thinking as part of a group. It's just not your group.

If there was an invading army coming in 5 years that could be beaten with group action or else we would all die, with nowhere to flee to, would you just relax for 5 years and then die?

Hell no. But who's "we", kemosabe?

Even while watching others working on a defense? Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort?

The "defense" appears to involve implementing totalitarian systems of control, a digital tyranny that is inescapable and unaccountable, with the doomers and their PMC patrons on top. This is necessary to prevent an entirely hypothetical problem so disastrous that we can't afford to ask for empirical verification that it exists. Also, we shouldn't actually expect verifiable progress or results, because it probably won't work anyway so any failure is what we've already been conditioned to expect. Meanwhile, the tyranny part works just fine, and is being implemented as we speak.

No thanks.

Is the word 'altruism' such a turn-off to you? How about "honor" or "pride" or "loyalty to one's people"? How about "cowardice" or "weakling"? Do these words shift anything for you, regarding the vibes?

I doubt that we share a common understanding of what these words mean, or what they imply. They do not shift "the vibe", because they have no leg to stand on. I don't believe in the hell you're peddling, so its horrors do not motivate me.

Question: Are there any problems in history that were solved by the actions of a group of people instead of one person acting unilaterally that you think were worth solving?

Sure, many of them. The Civil War seems like a reasonable example. But such problems are a minority of the perceived problems actually demanding group action.

And the "Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort?" question is worth an answer to, I feel.

With no actual evidence that the problem exists, and no evidence that, if it does, they're actually contributing to a solution, it seems to me that the appropriate sacrifice is approximately zero.

So we have two questions, and we should probably focus on one.

  1. Is the problem real?
  2. Is there a way to contribute to a solution?

Let's focus on 1.

https://www.astralcodexten.com/p/the-phrase-no-evidence-is-a-red-flag

What do you mean "no actual evidence that the problem exists"? Do you think AI is going to get smarter and smarter and then stop before it gets dangerous?

"Suppose we get to the point where there’s an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?" (Yudkowsky)

Are you not familiar with the reasons people think this will happen? Are you familiar, but think the "base rate argument" against is overwhelming? I'm not saying the burden of proof falls on you or anything; I'm just trying to get a sense of where your position comes from. Is it just base-rate, outside-view stuff?

What do you mean "no actual evidence that the problem exists"? Do you think AI is going to get smarter and smarter and then stop before it gets dangerous?

It seems to me that there's three main variables in the standard AI arguments:

  • how quickly can iterative self-improvement add intelligence to a virtual agent? Usually this is described as "hard takeoff" or "soft takeoff", to which one might add "no takeoff" as a third option.
  • how does agency scale with intelligence? This is usually captured in the AI-boxing arguments, and generally the question of how much real-world power intelligence allows you to secure.
  • what does the tech-capability curve look like? This is addressed in arguments over whether the AI could generate superplagues, or self-propagating nanomachines that instantly kill everyone in the world in the same second, etc.

On all three of these points, we have little to no empirical evidence and so our axioms are straightforwardly dispositive. If you believe intelligence can be recursively scaled in an exponential fashion, that agency scales with intelligence in an exponential fashion, and that the tech-capability curve likewise scales in an exponential fashion, then AI is an existential threat. If you believe the opposite of these three axioms, then it is not. Neither answer appears to have a better claim to validity than the other.

My axioms are that all three seem likely to hit diminishing returns fairly quickly, because that is the pattern I observe in the operation of actual intelligence in the real world. Specifically, I note the many, many times that people have badly overestimated the impact of these three variables when it comes to human intelligence: overestimating the degree of control over outcomes that can be achieved through rational coordination and control of large, chaotic systems, as well as the revolutions that tech improvements can provide. Maybe this time will be different... or maybe it won't. Certainly it has not been proven that it will be different, nor has the argument even been strongly supported through empirical tests. I'm quite open to the idea that I could be wrong, but failing some empirical demonstration, the question then moves to "what can we do about it."
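(A toy sketch of where the axioms diverge, with made-up numbers and functional forms that come from me, not from anyone in the actual debate: the whole disagreement turns on whether each round of self-improvement scales with current capability or saturates.)

```python
# Purely illustrative: two made-up growth rules for "capability" under
# recursive self-improvement. The rates and ceiling are arbitrary.

def exponential_step(c, rate=0.5):
    # Each improvement is proportional to current capability -> runaway growth.
    return c + rate * c

def saturating_step(c, rate=0.5, ceiling=100.0):
    # Improvements shrink as capability nears a ceiling -> diminishing returns.
    return c + rate * c * (1 - c / ceiling)

c_exp = c_sat = 1.0
for _ in range(20):
    c_exp = exponential_step(c_exp)
    c_sat = saturating_step(c_sat)

print(f"After 20 steps: exponential ~ {c_exp:.0f}, saturating ~ {c_sat:.0f}")
# The exponential rule blows past any fixed threshold; the saturating (logistic)
# rule levels off near the ceiling no matter how many more steps you run.
```

Run the same loop with either rule and you get the two camps' conclusions back out: which rule describes reality is exactly the empirical question neither side has settled.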


But is Big Yud a contrarian? His themes about AI being a danger, broadly speaking, tend to poll well.

I don't think that "meta-contrarian" is a thing, really. Being a contrarian is more about being a type of a person than having a particular position. A contrarian will drift from a crowd to crowd when his contrarianism starts getting on a previous crowd's nerves, but his personality causes him to almost immediately start contrarian-ing towards that new crowd as well, leading to that crowd, getting annoyed too. What seems like "meta-contrarianism" is, then, a contrarian having found a new group (of people who are, probably falsely, called contrarians, even though they just have a minority position on some issue) and then starting his contrarian thing.

Someone like Michael Tracey really seems like the contrarian type. When I encountered Tracey on Twitter, he was in the process of splitting from the left due to his posting on how the Floyd drama had led to an increase in crime and urban decay, putting him adjacent to a more rightwards position; it was then possible to observe how Tracey himself noted this and started bashing the right as well. When I read Eduard Limonov's biography, he seemed to have similar tendencies, first becoming a Soviet dissident and then getting annoyed with the dissident "in-crowd" and becoming a Stalin appreciator to get on their nerves.

Your point would be better if Yud were a prophet in the wilderness, but instead he's an idiot who has influence in the development of LLMs (and whatever AGIs emerge from their development). It would be like having a board member at Intel who wants to make their chips hotter and slower. He's past the point of contrarianism: he's a Yuddite.

who has influence in the development of LLMs

Not really?

I'm pretty sure that if Yudkowsky were king, GPT-4 never would have gone public. He was already concerned about GPT-4 level models being a potential danger.

Isn't this point basically just "yes, you should be able to have contrarian views, but only when they're completely ignorable and useless"? If the Opposition can't actually do anything, then there's really no point in having them. I understand if you just think the Anti-AI position is dumb, but your argument seems like a general argument against opposition.

I just wanted to make the distinction that being a contrarian is merely being against the prevailing wisdom. It doesn't imply action, only disagreement.