
Culture War Roundup for the week of November 28, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


"if there's even a 5% chance of AI x-risk it's worth expending a lot of energy on", which I disagree with. It's not very rigorous, but I'd say that if the danger is less than ~30%, I'm not that worried about it.

As far as I'm concerned, the value of mitigating a 5% existential risk from AGI is worth precisely 5% of what I'd be willing to spend to prevent a 100% risk of lethal AGI.

So about 5%x(All the money in the world). That's a pretty huge number!

I don't know why you assign a nonlinear function such that a 30% risk would count disproportionately more, but I'm genuinely unable to think of a good reason for one myself.
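For concreteness, here's a minimal sketch of the two rules being contrasted in this exchange. All numbers are purely illustrative and nobody's actual model: a linear rule scales willingness to spend with the probability, while a hard threshold rule ignores the risk entirely below some cutoff like ~30%.

```python
# Illustrative sketch only: linear expected-value spending vs. a hard
# "don't care below the cutoff" threshold. The stake is normalised to 1
# ("all the money in the world") and the 30% cutoff is taken from the thread.

W = 1.0  # normalised stake

def linear_budget(p, stake=W):
    """Willingness to spend scales linearly with the risk probability."""
    return p * stake

def threshold_budget(p, cutoff=0.30, stake=W):
    """Ignore the risk entirely below the cutoff; treat it normally above."""
    return 0.0 if p < cutoff else p * stake

for p in (0.01, 0.05, 0.30, 0.50):
    print(f"p={p:.2f}  linear={linear_budget(p):.2f}  threshold={threshold_budget(p):.2f}")
```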

I think it's that it's not at all clear how much easier it would be, and certainly not clear whether it would be so easy that it would enable something like an "intelligence explosion."

Well, nobody knows that with any level of certainty approaching what we might assign to our understanding of, say, mathematical theorems, or even just the plain old laws of physics. But that's where the smart money is as far as I'm concerned.

And even without an intelligence explosion, I believe that even a modest intelligence advantage in absolute terms has disproportionately high effective impact. I would find a hostile human being with 40 more IQ points than me to be a formidable opponent, let alone one that isn't biologically constrained!

Just consider a graph of lifetime earnings versus IQ as illustrative, and to the extent that money is kinda sorta equivalent to power, I'm not betting against the AGI.

In other words, even something as 'tame' as an AGI with 160 IQ scares the shit out of me, given the ease of self-replication, the coordination advantages it has over meat humans, etc. No need for galaxy-brained ones to be a fatal risk.

(Not even going into the risk of sub- or roughly human-level AGI that might leverage speed intelligence to be a killer.)

As far as I'm concerned, the value of mitigating a 5% existential risk from AGI is worth precisely 5% of what I'd be willing to spend to prevent a 100% risk of lethal AGI.

So about 5%x(All the money in the world). That's a pretty huge number!

Saying "the low probability doesn't matter because such a large amount of damage has to be prevented" is a rephrasing of Pascal's Mugging.

No. Pascal's Mugging is concerned with very low probabilities, verging on infinitesimal.

It is very much not an argument that merely unlikely things can be dismissed without further thought.
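A rough numeric contrast, with all figures purely illustrative: a 5% risk applied to a large but bounded stake is ordinary expected-value arithmetic, whereas a mugging-style wager only stays relevant because the claimed payoff can be inflated without limit as the probability shrinks.

```python
# Illustrative contrast (all numbers are order-of-magnitude placeholders).
# The 5% case: an ordinary probability applied to a bounded, if enormous, stake.
# The mugging case: the expected value is driven entirely by a payoff the
# mugger names, which can be doubled at will with no new evidence.

world_gdp_usd = 1e14                      # placeholder for "all the money in the world"
ordinary_risk_ev = 0.05 * world_gdp_usd   # ~5e12: huge, but pinned down by a bounded stake

mugging_prob = 1e-20                      # verging on infinitesimal
mugging_payoff = 1e30                     # whatever the mugger claims
mugging_ev = mugging_prob * mugging_payoff

print(f"5% risk EV:       {ordinary_risk_ev:.2e}")
print(f"mugging-style EV: {mugging_ev:.2e}")
```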

5%

This number, though, is pure asspull.

And did anyone else claim otherwise? The person who used it was simply using it as an example of the threshold at which he stopped caring.

I would find a hostile human being with 40 more IQ points than me to be a formidable opponent, let alone one that isn't biologically constrained!

All else equal?

The AI would be a much greater threat given access to the same resources, but I really wouldn't fuck with a motivated, hateful human genius myself.

This sneaks in the implication that a person with 160+ IQ who randomly hates your guts to the extent that they dedicate themselves to ruining your life would actually exist.

No it doesn't. I never claimed they did, merely that I would be rather worried if that was the case.

Yeah the threshold is basically just vibes.

Well, nobody knows that with any level of certainty approaching what we might assign to our understanding of, say, mathematical theorems, or even just the plain old laws of physics. But that's where the smart money is as far as I'm concerned.


The only general intelligence currently in existence (or at least, the smartest one that we are aware of), humans, cannot bootstrap in this way. Could "human-level" intelligence do this if it were run on silicon? Maybe. But it seems difficult-to-impossible to say, and certainly difficult-to-impossible to say how easy it would be, so it's hard for me to agree that the smart money is on an intelligence explosion.

To be fair, humans are already at the end of a bootstrap sigmoid, and that was via a very long feedback loop.

This seems to touch upon my point in the parallel post, so I should reiterate that you don't need a nonlinear utility function to choose "starve MIRI of attention" as your response if the risk is 5%. You just need to expect the solution that MIRI would bring about to be worse than losing 5% of all the money in the world.

The gap from "starve MIRI of attention" to "ignore AI x-risk entirely" is then filled by believing that given that you don't like the most prominent organisation addressing AI x-risk and are a nobody, there is nothing you personally can do that would meaningfully shift the risk, and so you ought to optimise your actions conditional on the 95% scenario.

As an aside, the nonchalant optimisation over "all the money in the world" as opposed to what is at your own personal disposal seems to be pretty close to what makes the SBFs of the world spooky. Their plans all too often seem to amount to "1. get as close as possible to controlling as much of the world's capabilities as possible; 2. optimise the use of that according to my value function". They casually seek to uproot the very ancient Chesterton's fence that is the Nash equilibrium of individual mostly selfish humans mostly controlling small slices of reality to boring selfish ends, trusting that the social welfare of the strategy profile they reason themselves into dictating - or, worse, of the new and hitherto unexplored Nash equilibrium that a bunch of conflicting "altruistic" world-optimisers with different values will converge towards - will be better. (Fun result from game theory: altruism can in fact make Nash equilibria worse!)
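For concreteness, here is one toy construction of that parenthetical claim - the payoffs are invented for illustration and the altruism weight alpha = 0.5 is arbitrary. With purely selfish players the unique pure Nash equilibrium has total welfare 12; make both players 50% altruistic (each maximising own payoff plus half the other's) and the unique equilibrium shifts to a profile with total welfare 11.

```python
# Toy 2x2 game (payoffs invented for illustration) where partial altruism
# moves the unique pure Nash equilibrium to a profile with strictly lower
# total welfare. Each player maximises: own_payoff + alpha * other_payoff.

from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("T", "L"): (6, 6),
    ("T", "R"): (2, 2),
    ("B", "L"): (5, 10),
    ("B", "R"): (-4, 15),
}
ROWS, COLS = ("T", "B"), ("L", "R")

def utilities(profile, alpha):
    u1, u2 = payoffs[profile]
    return u1 + alpha * u2, u2 + alpha * u1

def pure_nash(alpha):
    """Return all pure-strategy Nash equilibria under altruism weight alpha."""
    eqs = []
    for r, c in product(ROWS, COLS):
        v1, v2 = utilities((r, c), alpha)
        row_ok = all(v1 >= utilities((r2, c), alpha)[0] for r2 in ROWS)
        col_ok = all(v2 >= utilities((r, c2), alpha)[1] for c2 in COLS)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

for alpha in (0.0, 0.5):
    for eq in pure_nash(alpha):
        print(f"alpha={alpha}: equilibrium {eq}, total welfare {sum(payoffs[eq])}")
# alpha=0.0: equilibrium ('T', 'L'), total welfare 12
# alpha=0.5: equilibrium ('B', 'R'), total welfare 11
```

The mechanism in this toy case: the altruistic row player "sacrifices" by switching to B (giving up a little to hand the column player a lot), but the column player's best response to B is then R, which is great for the column player, terrible for the row player, and slightly worse in total.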

You just need to expect the solution that MIRI would bring about to be worse than losing 5% of all the money in the world.

Fair enough. But that is probably not the reason that the person I replied to set that arbitrary threshold.

As an aside, the nonchalant optimisation over "all the money in the world" as opposed to what is at your own personal disposal seems to be pretty close to what makes the SBFs of the world spooky.

If I'm optimizing for making all the money in the world, I'm doing a piss-poor job at it. Much better for my potentially bruised ego that I hold no such aspirations myself, and that it was a rhetorical figure more than anything else. Or rather, that's the amount of money that the Powers That Be should spend on the matter.

Their plans all too often seem to amount to "1. get as close as possible to controlling as much of the world's capabilities as possible; 2. optimise the use of that according to my value function"

Which reduces to, to put it bluntly, the rather age-old habit of most rich people to:

  1. Try and get richer.

  2. Do whatever the hell they like with their money.

When put that way, I can only see efforts to single out EAs as uniquely and qualitatively different as rather unjust, to say the least. Having semi-explicit utility functions isn't that big of a deal.

very ancient Chesterton's fence that is the Nash equilibrium of individual mostly selfish humans mostly controlling small slices of reality to boring selfish ends

And that looks to me like the even more ancient practice of Old Man Chesterton parceling off land with fences to sell for financial gain. Not something remotely unique to the EA community. They're not about to capture a large fraction of global wealth by means other than the same AGI they're scared of.

Fun result from game theory: altruism can in fact make Nash equilibria worse!

Good to know, but I doubt that it's the typical case that altruism makes things worse.

Fair enough. But that is probably not the reason that the person I replied to set that arbitrary threshold.

I don't know, do you think it's that uncommon? Of course we're all susceptible to typical-minding, but my expectation certainly would be that most people's revealed preferences would be pretty ruthless towards morally alien human societies - and, as an almost inevitable consequence, assign low value to the future under MIRI's machine god. Most people I know who read about it are even suitably creeped out by the Culture, which if anything presents a hopelessly rose-tinted perspective of living under the watch of "aligned" zookeepers.

If I'm optimizing for making all the money in the world, I'm doing a piss-poor job at it. Much better for my potentially bruised ego that I hold no such aspirations myself, and that it was a rhetorical figure more than anything else. Or rather, that's the amount of money that the Powers That Be should spend on the matter.

Sorry if it came across that way - I wasn't seeking to accuse you personally of doing that; it's just that the reflex to optimise over total wealth rather than your own slice of reality, even if only for the sake of argument, struck me as likely part of the same memescape.

Which reduces to, to put it bluntly, the rather age-old habit of most rich people to:

  1. Try and get richer.
  2. Do whatever the hell they like with their money.

When put that way, I can only see efforts to single out EAs as uniquely and qualitatively different as rather unjust, to say the least. Having semi-explicit utility functions isn't that big of a deal.

I think this collapses a lot of unlike instances of "whatever the hell they like". The distinctively busybody nature of EA rich people's value function seems to me to make for an uncommon combination, though of course not an unheard-of one - without the "effective" component of EA, and perhaps controlling for level of education, I'd expect altruism (and especially altruism untempered by deontological principles about being light-touch in your interactions with strangers) and being rich to be anticorrelated. Genuine past instances of "powerful people micromanaging strangers for their own notion of good" look like colonial abuses and Victorian workhouses to me.

They're not about to capture a large fraction of global wealth by means other than the same AGI they're scared of.

I'm inclined to analyse their control as going beyond the number in their bank accounts. The surprising fire support for SBF in establishment media, frequently pointed out around here, strikes me as evidence of an ongoing successful grab for indirect/memetic control of far more wealth than is nominally their own. (Gloss: if NYT journalists like EA enough, they can probably induce Bill Gates to use his wealth in alignment with EA values too.)

Good to know, but I doubt that it's the typical case that altruism makes things worse.

Hard to quantify given that the games that are easy to analyse almost never adequately model anything more complex than online auctions, but I remember it as being more common than you'd expect.

(Checking out for the day, sorry if my responses fall off. It's been a while since I last tried top-level posting something big and controversial and the workload of following up adequately is nontrivial.)