
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I caught this exchange after the previous thread had mostly closed, and I'd like to push back on the claim a little.

BinaryHobo:

I remember talk about just using the excess power to pump water up hill during the day and running it through turbines coming down at night.

Did anything ever come of that?

The_Nybbler:

The physical conditions necessary to make hydro storage practical aren't common.

(How do we do the fancy quotes with user, timestamp, and maybe a link? It'd be useful here.)

It's true that hydroelectric power sources, as in dams, have saturated the supply of naturally-occurring American sites. You need a river in a rocky valley, and there are only so many of those to go around; once they're used up, it's very hard to create more of them.

What haven't been exhausted, and in fact what can be readily found or exploited, are height differentials in general. Hills, mountains, exhausted mines, and deep valleys with no water supply all offer significant height differentials, are naturally occurring, and can be readily built out into large-scale closed-loop pumped-hydro storage: a closed reservoir at each extreme and a reversible pump-turbine to store potential energy in times of excess and generate power in times of deficit. Should those be exhausted, off-shore dropoffs are an enormous resource of the same kind, at the cost of more difficult installation and operation in every regard. And if we exhaust THOSE, water towers at sea or underground reservoirs on land can be constructed as well.
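The scale involved is easy to sanity-check with the gravitational potential energy formula E = ρVgh. Here's a back-of-the-envelope sketch; the reservoir volume, head, and round-trip efficiency below are invented illustrative numbers, not data from any real site.

```python
# Back-of-the-envelope capacity of a closed-loop pumped-hydro site.
# All site parameters are illustrative assumptions, not real-world data.

RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def storage_mwh(volume_m3: float, head_m: float, efficiency: float = 0.8) -> float:
    """Recoverable energy (MWh) for a reservoir of the given volume and
    height differential. `efficiency` is round-trip (pump plus turbine
    losses); roughly 70-80% is typical for pumped hydro."""
    joules = RHO * volume_m3 * G * head_m * efficiency
    return joules / 3.6e9  # 1 MWh = 3.6e9 J

# Example: a 2,000,000 m^3 upper reservoir with a 400 m head,
# roughly the scale of a large abandoned open-pit mine.
print(round(storage_mwh(2_000_000, 400), 1))  # on the order of a few GWh
```

Even a single modest site at these made-up numbers stores over a gigawatt-hour, which is why closed-loop designs don't need a river, just a height differential and water that stays in the loop.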

All of this, of course, is dumb and America should just take the leash off nuclear, as argued here. (I've not read it yet, but I expect it to make the points I would inline here.) That we haven't yet is a shame and a testament to our collective idiocy and Puritan hangover.

I wonder, is there anyone on The Motte who opposes nuclear power? Either because of concerns relating to safety, waste disposal and other "environmentalist" canards, or because it's supposedly uneconomical.

And if everyone here is pro-nuclear, why is that? Are mottizens just more rational than everyone else, or is it because of chronic contrarianism?

(How do we do the fancy quotes with user, timestamp, and maybe a link? It'd be useful here.)

Like embedding a Tweet? I don't think you can do that. But there's a "Copy link" button under every comment and you can put an @ in front of a username so that it links to their profile and they get notified.

Are mottizens just more rational than everyone else, or is it because of chronic contrarianism?

As a pro-nuclear «chronic contrarian»: we can't be relied upon to distinguish the latter from the former. But I'd say it's the diminished vulnerability to threat models that appear poorly substantiated. We don't put much stock in «something may happen» stories.

For the same reason many here tend to pooh-pooh «the coof», Trump's «attempt at fascist insurrection», the danger of Russia or China, AGI risk, climate change, whatever, even school shootings and violence. On the other hand, we are highly suspicious of risk narratives that seem to justify reductions of freedom in all senses – from directly political ones to the basic freedoms of exploring space and enjoying material abundance; degrowth ideology doesn't appeal to us at all. Inasmuch as there are conservatives and reactionaries here who profess to respect Chesterton's fences and the precautionary principle, that respect is not consistent but restricted to domains where change and action are heavily enemy-coded and in some ways still Puritan, statist and restrictive (e.g. CRT programming in schools).

Put another way, we aren't very contrarian. We're just non-neurotic males with a typical masculine attitude toward minor risks and risky-seeming things. The broader society and its consensus is… less like this.

Case in point:

It’s also enraged a bloc of stoutly anti-nuclear countries that includes Germany and Austria. Seven of them wrote a joint letter earlier this month warning that including nuclear-generated hydrogen could “jeopardize the achievement of … climate targets” and reduce ambitions on renewables.

“The attempt to declare nuclear energy as sustainable and renewable must be resolutely opposed,” Austrian Energy Minister Leonore Gewessler said after the deal.

Nuclear is quite bad if 1) you focus on the tail risk of disasters (Chernobyl, Three Mile Island, Fukushima…) or on mistaken estimates of baseline harmfulness (such as the consequences of waste leaks) and/or 2) you evaluate nuclear by its cost per unit of output in the context of prohibitively expensive safety measures predicated upon its supposed danger (assessments, plant designs and, again, secure waste storage over millennia). Put in the proper quantitative context, it's less dangerous per unit of power than most other energy sources. But there's no way to make coal or solar seem so spooky to a layman. I mean, wind, sun, it's all so nice, living in harmony with nature, what could go wrong! So what if we'll need to restrain our capitalist greed and consume a little less, give some rest to our mother Earth! Indeed, it'd be a positive if we got rid of capitalism even without any ecological benefit; some would say that's the whole point. The precariousness of nature also means one can feel morally superior on account of normie, unambitious urbanite life choices.
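The "per unit of power" framing can be made concrete. The figures below are rough, commonly cited death-rate estimates per TWh generated (in the spirit of public summaries such as Our World in Data); treat them as order-of-magnitude illustrations, not precise data.

```python
# Approximate, commonly cited death-rate estimates per TWh of electricity
# (accidents plus air pollution). Rough figures for illustration only.
deaths_per_twh = {
    "coal": 24.6,
    "oil": 18.4,
    "natural gas": 2.8,
    "hydro": 1.3,
    "wind": 0.04,
    "nuclear": 0.03,
    "solar": 0.02,
}

# Coal versus nuclear, per unit of energy actually delivered:
ratio = deaths_per_twh["coal"] / deaths_per_twh["nuclear"]
print(f"coal is roughly {ratio:.0f}x deadlier per TWh than nuclear")
```

On these numbers nuclear sits alongside wind and solar, two to three orders of magnitude below coal, which is the quantitative context the spooky-tail-risk framing leaves out.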

The optics accessible to midwits are just bad, built into every facet of culture from fiction tropes about evil power sources to signs on trash containers; whatever your nerdy arguments, generations of shallow artists competing for NGO grants (with the intent to suffocate, debase and diminish humanity under the guise of rational planning) have conscientiously labored to make it this way.

Not much to do about it now but remind them of the human cost of their actions, meticulously calculated.

The broader society and its consensus is… less like this.

Well, yeah; they don't currently perceive that the barbarians are at the gates.

And unfortunately for those men for whom the existence of barbarians is a time-tested way to extract payment and investment from broader society in exchange for security guarantees (and it has been since the dawn of humankind), they're correct; this is why the entire society must rationalize its newly-enabled refusal to pay them.

Hence, degrowth as religion; men staying in their parents' household until death would in a normally-functioning society be hideously perverse, but it's certainly a clear reminder of the human cost of the actions of their social cohort (and probably the rational thing to do in a society like this).

Yes, investing in growth is objectively the right thing to do, and will make the society even stronger in the long run, but why do that when you can just hoard your gains until death takes them from you?

For the same reason many here tend to pooh-pooh «the coof», Trump's «attempt at fascist insurrection», the danger of Russia or China, AGI risk

Do people on the Motte not take AGI risk seriously? I thought I was the only one here who thought it was overblown.

Most people here seem to take it very seriously although metacontrarians exist.

For me, AI risk is completely different from nearly all other x-risks, including asteroids, nuclear war, climate change, etc., because the risk from AI cannot be quantified. I ask myself, what would a superintelligence do? I have no fucking clue. And neither does anyone else. People saying, "I'm not worried about X, I'm worried about Y" are missing the point. While it's fun to speculate about X or Y, it is impossible to predict what a superintelligence will do. It's a true unknown unknown. AI risk is nearly unique in that way.

No, the whole point of what you believe to be «metacontrarianism» is that it's entirely possible to predict what a superintelligence will do, when we know what it has been trained for and how exactly it's been trained. Terry Tao is a mathematical superintelligence compared to an average human. What will he do? Write stuff, mainly about mathematics. GPT-4 is a superintelligence in the realm of predicting the next token. What will it do? Predict the next token superhumanly well. AlphaZero is a tabletop-game superintelligence. What will it do? Win at tabletop games. And so it goes.

Intelligence, even general intelligence, even general superintelligence, is not that unlike physical strength as the capacity to exert force: on its own, as a quantity, it's a directionless, harmless capability to process information. Instrumental convergence for intelligence, as commonly understood by LWers, is illiterate bullshit.

What I admit we should fear is superagency, however it is implemented; and indeed it can be powered by an ASI. But that's, well, a bit of an orthogonal concern and should be discussed explicitly.

I'm sure you know about mesaoptimizers. Care to explain why that doesn't apply to your thesis?

That said, I'm not particularly married to any one particular flavor of AI risk. I'm taking the Uncle Vito approach. The AI naysayers have been consistently wrong for the last 5 years, whereas the doomers keep being proven correct.

I know what people have written about mesa-optimizers. They've also written about the Waluigi effect. I am not sure I «know» what mesa-optimizers, with respect to ML, actually are. The onus is on those theorists to mechanistically define them and rigorously show that they exist. For now, all the evidence that I've seen has been either Goodhart/overfitting effects well known in ML, or seeing-Jesus-in-toast-tier things like Waluigi.

To be less glib, and granting the premise that mesa-optimizers exist, please see the Plakhov section here. In short: we do not need to know the internal computations and cogitations of a model to know that regularization will still mangle and shred any complex subroutine that does not dedicate itself to furthering the objective.

And it's not like the horny-humans-versus-evolution example, because «evolution» is really just a label for a historical pattern that individual humans can frivolously refuse to humor with their life choices; in model training, the pressure to comply with the objective bears on any mesa-optimizer within its own alleged «lifetime», directly (and not via social shaming or other not-necessarily-compelling proxy mechanisms). Imagine if you received a positive or negative kick to the reward system conditional on your actions having increased or decreased your ultimate procreation success: that isn't anywhere near as easy to cheat as what we do with our sex drive or other motivations. Evolution allows for mesa-optimizers, but gradient descent is far more ruthless.

…Even that would be something of a category error. Models or sub-models don't really receive rewards or punishments; that is another misleading metaphor which is, in itself, predicated upon our clunky mesa-optimizing biological mechanisms. They're altered based on the error signal; the results of their behavior and their «evolution» happen on the same ontological plane, unlike our dopaminergic spaghetti, which one can hijack with drugs or self-deception. «Reinforcement learning should be viewed through the lens of selection, not the lens of incentivisation».
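The «selection, not incentivisation» framing can be shown with a toy sketch. Everything below (the one-weight model, the rates, the constant-prediction task) is invented purely for illustration: a parameter that never touches the task loss isn't «punished» into compliance; the update rule with weight decay simply erodes it, step after step, regardless of any internal «wants».

```python
# Toy illustration: under gradient descent with weight decay, a "freeloading"
# parameter that contributes nothing to the task output is steadily decayed
# toward zero, while a useful parameter settles near its task optimum.
# Entirely made-up model and hyperparameters, for illustration only.

def train(steps: int = 10_000, lr: float = 0.1, weight_decay: float = 0.01):
    w_useful, w_freeloader = 0.0, 5.0  # start with a large "parasitic" weight
    target = 3.0                       # task: predict a constant via w_useful
    for _ in range(steps):
        pred = w_useful * 1.0          # w_freeloader never affects the output
        grad_useful = 2 * (pred - target)  # d/dw of squared error
        grad_freeloader = 0.0              # no task gradient at all
        w_useful -= lr * (grad_useful + weight_decay * w_useful)
        w_freeloader -= lr * (grad_freeloader + weight_decay * w_freeloader)
    return w_useful, w_freeloader

w_u, w_f = train()
print(w_u, w_f)  # w_u near 3.0; w_f driven toward zero
```

Nothing here models the freeloader's «experience»; the selection pressure acts on the parameters themselves, on the same ontological plane as the error signal, which is the point being made above.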

Humans have a pervasive agency-detection bias. When so much depends on whether an agent really is there, it must be suppressed harshly.


The AI naysayers have been consistently wrong for the last 5 years, whereas the doomers keep being proven correct.

I beg to differ.

The doomers have been wrong for decades, and keep getting more wrong; the AI naysayers are merely wrong in another way. Yudkowsky's whole paradigm has failed, in large part because he has been an AI naysayer in every sense in which current AI has succeeded. Who is being proven correct? The people Yud, in his obstinate ignorance, had been mocking and still mocks: AI optimists and builders, the pioneers of DL.

You are simply viewing this through the warped lens of Lesswrongian propaganda, with the false dichotomy of AI skepticism and AI doom. The central position both those groups seek to push out of the mainstream is AI optimism, and the case for it is obvious: less labor, more abundance, and everything good we've come to expect from scientific progress since the Enlightenment, delivered as if from a firehose. We are literally deploying those naive Golden Age Sci-Fi retrofuturist dreams that tech-literate nerds loved to poke holes in, like a kitchen robot that is dim-witted yet can converse in a human tongue and seems to have personality. It's supposed to be cool.

Even these doomers are, of course, ex-optimists: they intended to build their own AGI by the 2010s, and now that they've made no progress while others have struck gold, they're going to podcasts, pivoting to policy advice, attempting to character-assassinate those more talented others, and calling them derogatory names like «stupid murder monkeys fighting to eat the poison banana».

Business as usual. We're discussing a similar thing with respect to nuclear power in this very thread. Some folks lose it when a technical solution makes their supposedly necessary illiberal political demands obsolete, and begin producing FUD.

Good point about mesaoptimizers and the difference between evolution and gradient descent.

The onus is on those theorists to mechanistically define them and rigorously show that they exist.

Here's where I disagree. As someone once said, "he who rules is he who sets the null hypothesis". I claim that the onus is on AI researchers to show that their technology is safe. I don't have much faith in glib pronouncements that AI is totally understood and safe.

Nuclear power, on the other hand, is well understood, has bounded downside, and is a mature technology. It's not going to destroy the human race. We can disprove the FUD against it. But in 1945, I might have felt differently.


Do people on the Motte not take AGI risk seriously?

I don't; I'm more afraid of the economic enclosure potential that will likely result, to say nothing of the power these tools will bestow upon the State. The last 60 years have been bad for civil rights and that was just the result of normal economic centralization; this, by contrast, is advanced centralization.

I know that I take it seriously, but not because I think I'm going to be turned into a heap of paperclips or atomized by a T-1000. I take it seriously because I see something else coming: a paradigm shift in propaganda and narrative control powered by LLMs, image/video generators and AI-assisted search engines (I'll confess that I may be a little too unironically Kaczynski-pilled). I don't see how the future I envision is any less apocalyptic than the one our loveable quokkas fear, however.

Did you not see the AI threads over the last week? There are plenty of us anti-doomers here.