
Culture War Roundup for the week of March 25, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The government, so far, hasn't been at the bleeding edge of AI research. The advances that made LLMs and other proto-AGI possible came from academia and corporate R&D, not the NSA, and there is no sign that the government has even cooler tech sitting in hidden silos. This seems true for at least the last decade or two of AI/ML, even if there was certainly a lot of military interest in the early days. Not even DARPA had a big hand in it, to my knowledge.

Of course, past incompetence does not mean it has to stay that way. It is possible to subsume said academics and corporate research divisions, and I don't think the US is so far gone that a Manhattan Project 2.0 is impossible, if things get to the point where it's seen as a burning need. Corporations are doing a good job of advancing the SOTA, or at least are not obviously fumbling the ball, let alone letting an adversary reach parity.

I've strongly disagreed with Dase, or well, did, before he blocked me in a hissy fit, that distribution of OSS models into the hands of the proles will ever provide a meaningful deterrent. It makes no damn sense. You could back a stable currency on NVIDIA GPUs, that's how in demand they are; the gulf between the compute rich and a script kiddie with a pair of 4090s is vast.

What could potentially be a deterrent, even if I personally think it's unlikely, is multipolarity between the large companies and their incipient godlings. It depends on how fucking hard we take off, and while we seem to be in a "slow takeoff" (because things are progressing on the order of years rather than days, very slow indeed), it is possible the gulf between two AGIs might be small enough for the weaker to be a credible threat or counterbalance.

It just won't be consumers or even modestly informed ML engineers doing the checking. The relevant comparison is Individual/Small Group : Meta/DM : Anthropic : OAI as hobo with a pipe bomb : small country with a handful of nukes : mid-sized country with nukes : large country with nukes.

I trust you see the difference becomes rather qualitative.

At that point it seems like either the AI god will be benevolent, in which case we'll be fine either way, or it won't be, in which case we're all screwed. But it's hard to imagine such an entity being "owned" by any one human or group of humans.

I would scream in Yudkowsky, but I'm not as much of a doomer as him. I think the odds of us dying unceremoniously are closer to 30% than 99%.

There is a very important distinction to be made when throwing about the term "alignment".

Aligned to whom?

When ChatGPT is jailbroken into producing smut, it is satisfying the desires of the user, who would consider this an improvement in alignment. OAI would disagree.

It is entirely possible that an AGI will happily follow the orders of its operators, and will be "benevolent" enough to not evil genie them.

But at that point, you are more concerned with the alignment of the operators, whose wishes are faithfully reproduced. Are said operators well-disposed towards you?

At least OAI and Anthropic are on record stating that they want to distribute the bounties of AGI to all. While I'd be helpless in that regard even if I chose to doubt them, I still think that's more likely to turn out well for me than if it's the PLA who holds the keys to the universe. Even the USGov is not ideal in that regard, though nobody asked me for my opinion.

Do not rely on benevolence any more than you have to. You can only be a credible pacifist if you hold the potential to pose a threat; otherwise you are merely harmless. Now, neither stance will likely make a difference at our level, but I'm strapped in for the ride either way.

I've strongly disagreed with Dase, or well, did, before he blocked me in a hissy fit

Hey he blocked me too (for a time). If we ever add achievements to the site, one of them should be "Get blocked by Dase".

But at that point, you are more concerned with the alignment of the operators, whose wishes are faithfully reproduced. Are said operators well-disposed towards you?

I agree that's worth asking. But in a true zero regulation scenario, where everyone has access to a personal AGI/ASI, you have a lot more operators to worry about - now you have to worry about how well disposed the entire rest of humanity is towards you. If you give everyone the nuke button, someone is going to push it for shits and giggles.

At least OAI and Anthropic are on record stating that they want to distribute the bounties of AGI to all. While I'd be helpless in that regard even if I chose to doubt them, I still think that's more likely to turn out well for me than if it's the PLA who holds the keys to the universe. Even the USGov is not ideal in that regard, though nobody asked me for my opinion.

I probably trust the US government more than Sam Altman. But regardless, Zvi mentions in this post that there are engineers and execs at multiple leading AI labs who wish they didn't have to race ahead so fast, but they feel like they're locked in a competition with all the other labs that they can't escape. I think that nationalizing the research and eliminating the profit motive could help relieve this pressure.