
Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Sooo, Big Yud appeared on Lex Fridman for 3 hours, a few scattered thoughts:

Jesus Christ, his mannerisms are weird. His face scrunches up and he shows all his teeth whenever he seems to be thinking especially hard about anything. I don't remember him being this way in the public talks he gave a decade ago, so either this only happens in conversation or something has changed; he wasn't like this on the Bankless podcast he did a while ago. It also became clear to me that Eliezer cannot become the public face of AI safety: his entire image, from the fedora to the cheap shirt to the facial expressions and flabby small arms, oozes "I'm a crank" energy, even if I mostly agree with his arguments.

Eliezer also appears to very sincerely believe that we're all completely screwed beyond any chance of repair and that all of humanity will die within 5 or 10 years. GPT-4 was a much bigger jump in performance over GPT-3 than he expected; in fact, he thought the GPT series would saturate at a level below GPT-4's current performance, so he no longer trusts his own model of how deep learning capabilities will evolve. He sees GPT-4 as the beginning of the final stretch: AGI and SAI are in sight and will be achieved soon... followed by everyone dying. (In an incredible twist of fate, him being right would make Kurzweil's 2029 prediction for AGI almost bang on.)

He gets emotional about what to tell the children, about physicists wasting their lives working on string theory, and I can hear real desperation in his voice when he talks about what he thinks is actually needed to get out of this: global cooperation on banning all GPU farms and large LLM training runs indefinitely, enforced even more strictly than nuclear treaties. Whatever you might say about him, he's either fully sincere about everything or has acting ability that stretches the imagination.

Lex is also a fucking moron throughout the whole conversation. He can barely even engage with Yud's thought experiments (imagine being someone trapped in a box, trying to exert control over the world outside yourself), and he brings up essentially worthless viewpoints throughout the discussion. You can see Eliezer diplomatically trying to suggest discussion routes, but Lex just doesn't know enough about the topic to provide any intelligent pushback or guide the audience through the actual AI safety arguments.

Eliezer also makes an interesting observation/prediction about when we'll finally decide that AIs are real people worthy of moral consideration: the point when we can pair Midjourney-like photorealistic video generation of attractive young women with ChatGPT-like outputs and voice synthesis. At that point, he predicts, millions of men will insist that their waifus are actual real people. I'm inclined to believe him, and I think we're only a year, or at most two, away from this actually being a reality. So: AGI in 12 months. Hang on to your chairs, people; the rocket engines of humanity are starting up, and the destination is unknown.


So yeah. This was the first time I ever listened to/watched one of Fridman's interviews. He seemed to burst onto the scene out of nowhere around a year ago. And everything I gathered from secondhand testimony and snippets of his content, and his twitter feed, led me to make an assumption:

The guy is basically trying to bootstrap himself to become the Grey-Tribe version of Joe Rogan.

And after hearing this interview, I updated MASSIVELY in favor of that model. It's not quite that he's cynically booking 'big name' guests who appeal to the nerdy corners of the internet without caring about the actual quality of discussion; he appears to be making an effort.

Yet his approach to the interview seems to be MUCH less about engaging with the guest's actual thoughts and much more about pressing them on various buzzword-laden 'deep' questions to see if they'll give him a catchy soundbite or a deep-sounding 'insight' on a matter that is, to put it bluntly, pseudospiritual. He's in there asking whether 'consciousness' is an important feature of intelligence, whether that is what makes humans 'special,' and whether preserving consciousness in the AGI would help make it friendly. He's kind of playing with the idea that there's something metaphysical (he would NEVER use the term supernatural, I'm sure) and poorly understood about how human thought processes work that gives them a higher meaning, I guess?

And EY has basically written at length explaining his belief that consciousness isn't some mysterious phenomenon; it is in fact completely explainable in purely reductionist, deterministic, materialist terms without any kind of special pleading whatsoever, and thus there's no mystical component that we need to 'capture' in an AGI to make it 'conscious.'

As you say, his blatant dodge on the AI box questions AND, I noticed, his complete deflection when EY literally asked him to place a bet on whether there'd be a massive increase in funding for AI alignment research as people 'wake up' to the threat (you know, the thing EY has spent his life trying to get funding for) betray a real lack of, I dunno, honest curiosity and rigor in his thought process. Did the guy read much of EY's writing before this point?

It's almost the same shtick Rogan pulls when he talks to guests (Alex Jones, for example) about various 'unexplained' phenomena and/or drug use and how they show how little we really know about the universe ("isn't that just crazy, man?"), but avoiding the actual spiritualist/woo language so the Grey Tribe isn't turned off.

At least the guys on the Bankless podcast noticed right away that they were out of their depth and acted more like a wall for EY to bounce his thoughts off.


As for EY.

Man. I think the guy is actually a very, very talented writer and is clearly able to hold his own in a debate setting on a purely intellectual level; he's even able to communicate the arguments he's trying to make effectively (if, unlike Lex, the other party is conversant in the topics at hand).

He even has an ironic moment in the interview, saying "Charisma isn't generated in the liver, it's a product of the brain," or some such. And yet he does not seem to have done much beyond the bare minimum to avoid setting off the audience's "crank detector." It's not clear that his persuasive powers, taken as a whole, are really up to the task of winning more people to his side.

Of course, for those only listening in rather than watching, that won't matter.

I'm not saying EY should just bite the bullet and work out, take some steroids, get 'jacked,' wear nice suits, and basically practice the art of hypnotizing normies in order to sneak his ideas past their bullshit detectors.

But... I'm also kinda saying that. He KNOWS about the Halo Effect, so within the boundaries set for him by genetics he should try to optimize for CHA. Doubly so if he's going on a large platform to make a last-ditch plea for some kind of sanity to save the human race. MAYBE a trilby isn't the best choice. I would suggest it would be rational to have a professional tailor involved.

But I do grok that the guy has pretty much resigned himself to having been placed in a fallen, doomed timeline, so he probably sees any additional effort as mostly a waste; or worse, maybe he's just so depressed that this is the best he can actually bring himself to do. And it is beyond annoying to me when his 'opponents' focus on his tone or appearance as reasons to dismiss his argument.

And to make a last comment on Fridman... he clearly DOES get that CHA matters. His entire look seems engineered to suggest minimalistic sophistication: sharp haircut, plain but well-fitted suit, and of course the aforementioned buzzwords that give the audience a little tingle when they hear them, thinking there's real meaning and insight being conveyed.

For what it’s worth, the impression he left on me was primarily that he is unwell. Depression at the very least, but I get a strong whiff of something else.

He's more or less admitted that he is in a state of despair about humanity solving the alignment problem before a sufficiently intelligent AI arises, one that will likely end life as we know it, and for the worse in the vast majority of cases.

So he's in a position where he believes his (and everyone's) days on this earth are numbered, and he would want to arrange his remaining days to be as pleasant as possible while still holding out some hope of pulling through.

In his appearance and mannerisms, he irresistibly reminds me of a particular relative with a personality disorder. As a halfwit normie, my best option is to ignore him.

Yep. He has never taken any real steps to craft his image for maximum normie appeal. From the viewpoint of spreading his ideas amongst a receptive audience, this was never a concern. People weighed the arguments he made on their own merits, and he built up his credibility the hard way.

But now he has to convince a larger audience of normie types to take him seriously, and he has never developed the knack for 'hypnotizing' them via dress, speech, and mannerisms so as to exude an 'aura' of trustworthiness.