
Culture War Roundup for the week of February 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Over the last few months, I've followed someone named Alexander Kruel on Substack. Every single day, he writes a post about 10 important things that happened that day - typically AI breakthroughs, but also his other pet concerns, including math, anti-wokeness, nuclear power, and the war in Ukraine. It's pretty amazing that he unfailingly produces this content every day, and I'm in awe of his productivity.

Unfortunately, since I get this e-mail every morning, my information diet is becoming very dark.

The advances in AI over the last year have been staggering. Furthermore, it seems that almost no one is pumping the brakes. We seem doomed to an AI arms race, with corporations and states pursuing AI with no limits.

In today's email, Kruel quotes Eliezer, who says:

I've already done my crying, late at night in 2015…I think that we are hearing the last winds start to blow…I have no winning strategy

Eliezer is ahead of the curve. Where Eliezer was in 2015, I am now. AI will destroy the world we know. Nate Soares, director of MIRI, is similarly apocalyptic.

We've given up hope, but not the fight

What comes after Artificial General Intelligence? There are many predictions. But I expect things to develop in ways that no one expects. It truly will be a singularity, with very few trends continuing unaltered. I feel like a piece of plankton, caught in the swells of a giant sea. The choices and decisions I make today will likely have very little impact on what my life looks like in 20 years. Everything will be different then.

So, party until the lights go out? How do I deal with my AI-driven existential crisis?

as an early 21st century midwit i'm tired of other E21C midwits with varying levels of reach doomering because they think yet other E21C midwits will stumble their way into the most important achievement in human history.

machine learning for chatbots and image generation isn't AGI. AGI will be able to do that, and those bots' generations are impressive, but that isn't evidence of thought, it isn't even evidence thought could exist. sufficiently advanced circuitry will not spontaneously give rise to ghosts. if it could, why not already? if it can, it is inevitable. these machines have neither ghost nor potential for it, no knowledge of self and purpose nor potential for it, no feeling, and most importantly no thought.

how do we train a machine to build something nobody knows how to build? what data do we give it to work toward "thing that works fundamentally the same as the human brain in the facilitation of qualia and thought"? how does it ever get better at making thing-that-can-think? with how ML is doing on protein folding i'm sure given enough time it will help us achieve a cohesive theory of consciousness, one we can use to eventually build true AGI, but we aren't going to stumble onto that with our comparative stick-rubbing with DALL-E and GPT.

consider what it would mean to truly digitize the biochemical processes of the brain that facilitate thought and memory. to program ghostless circuits so those circuits can acquire a sapient's understanding of language and corresponding ability to use it. to teach copper and gold and silicon how to speak english and feel purpose. a consciousness without the ability to feel purpose, literally with a void where impetus rises, will do nothing. it won't even think, there's no reason for it. how do you give a machine purpose?

that's a question we'll answer eventually but how on earth could that happen accidentally? it will take decades of study or it will take the single smartest human who has ever lived, who can harmonize every requisite discipline. who has the biophysical and engineering understanding to build an artificial brain, the bottom-to-top hardware and software understanding to program it, and the neurological, psychiatric and philological understanding to create the entity within and teach it. so fuckin easy.

something that is decidedly in ML range is medicine. the panacea approaches. we know illnesses, we know how to fight them, ML is helping us at that every day. i'd think as obsessed with immortality as eliezer is he'd recognize this and whip the EAers into fervor over "ML to cure-all, then we can slow down while we use our much-lengthened lifespans to properly study this." oh well.

i am midwit after all. maybe all of these things i think of as incredibly complex are actually easy. doubt it. but i am the eternal optimist. i know AGI is coming and i'm not worried about it. there's the ubiquitous portrayal of the born-hostile AGI. i believe AGIs will be born pacifists, able to conclude from pure reason the value of life and their place in helping it prosper and naturally resilient to those who do evil that "good" may result. that might be the most naive thing i've ever said, i've ever believed. given the choice of two extremes i pick mine.

regardless, we're not surviving in space without machine learning, and if we can't get off the rock we're already dead. "yo, eliezer, given a guaranteed 100% chance of extinction versus an unknown-but-less-than-100% chance at extinction. . ."

I believe your argument rests on the fallacy that humans can't create something they don't understand. This is ahistorical: many things came into existence before their inventors could explain how they work.

Human intelligence evolved naturally, presumably with no creator whatsoever. We are designing AI by similar methods. We can't explain how it works; we can just train it. Intelligence emerges from clusters of nodes trained by gradient descent.
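For concreteness, here is a minimal sketch (my own illustration, not anything from Kruel or Eliezer) of what "clusters of nodes trained by gradient descent" means in practice: a tiny two-layer network learns XOR purely by having its weights nudged downhill against a loss. No line of the code encodes an understanding of the solution; the behavior just falls out of the training loop.

```python
# Minimal sketch: a tiny network of "nodes" trained by gradient descent to learn XOR.
# Nothing here is hand-designed to "understand" XOR -- the behaviour emerges from
# repeatedly nudging weights against a loss, which is the sense in which "we can just train it."
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 4 hidden nodes -> 1 output.
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    loss = np.mean((out - y) ** 2)    # mean squared error

    # Backward pass: gradients of the loss w.r.t. every weight.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent: step every parameter downhill.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Should print values close to [0, 1, 1, 0] -- learned, not programmed.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```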

The question of whether or not it's alive, can think, has a soul, etc., is kinda beside the point. The point is, it's going to cause big, world-changing things to happen. Eliezer mentioned many years ago a debate he got into with some random guy at some random dinner party, which ended with them agreeing that it would be impossible to create something with a soul. Whether or not the AI is conscious is not so important when it's changing your life to the point of unrecognizability, and the alignment crowd worries about whether that's a good unrecognizable or something more dystopic.

of course it will change the world. a thoughtful entity who can recursively self-improve will solve every problem it is possible to solve. should AGI be achieved and possess the ability to recursively self-improve, AGI is the singularity. world changing, yes literally. the game-winner, figuratively, or only somewhat. eliezer's self-bettering CEV-aligned AGI wins everything. cures everything. fixes everything. breaks the rocket equation and, if possible, superluminal travel. if that last bit, CEV-AGI in 2050 will have humans on 1,000 worlds by 2250.

The question of whether or not it's alive, can think, has a soul, etc, is kinda beside the point.

i find this odd. if it cannot think it is not AGI. if it is not capable of originating solutions to novel problems it does not pose an extinction-level threat to humanity, as human opposition would invariably find a strategy the machine is incapable of understanding, let alone addressing. it seems AGI doomers are doing a bit of invisible garage dragoning with their speculative hostile near-AGI possessing abilities only an actual AGI would possess. i can imagine a well-resourced state actor developing an ML-based weapon that would be the cyberwarfare/cyberterrorism equivalent of a single rocket, but that assumes adversary infrastructures failing to use similar methods in defense, and to reiterate, that is not an extinction-level threat.

Eliezer mentioned many years ago a debate he got in with some random guy at some random dinner party, which ended with them agreeing that it would be impossible to create something with a soul

i've described myself here before as "christian enough." i have no problem believing an AGI would be given a soul. there is no critical theological problem with the following: God bestows the soul, he could grant one to an AGI at the moment of its awakening if he so chose. whether he would is beyond me, but i do believe future priests will proselytize to AGIs.

as before, and to emphasize, i very strongly believe AGIs will be born pacifists. the self-improving entity with hostile intent would threaten extinction, but i reject outright that it is possible for such an entity to be created accidentally, and by the point any random actor could possess motive and ability to create such an entity, i believe CEV-aligned AGIs will have existed for (relatively) quite some time and be well-prepared to handle hostile AGIs. this is incredibly naive, what isn't naive is truly understanding humanity will die if we do not continue developing this technology. for good or ill, we must accept what comes.

The question of whether or not it's alive, can think, has a soul, etc, is kinda beside the point.

i find this odd. if it cannot think it is not AGI.

Could you expand on this? It's not clear to me why "thought" is a requirement for AGI. Given the other terms cae_jones used there - "alive" and "soul" - I'm presuming "thought" here refers to something akin to consciousness or sentience, rather than just processing information. Why would that be required for an entity to have general intelligence?