Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Sooo, Big Yud appeared on Lex Fridman for 3 hours. A few scattered thoughts:

Jesus Christ, his mannerisms are weird. His face scrunches up and he shows all his teeth whenever he seems to be thinking especially hard about anything. I don't remember him being this way in the public talks he gave a decade ago, so this must either only happen in conversations, or something has changed. He wasn't like this on the Bankless podcast he did a while ago. It also became clear to me that Eliezer cannot become the public face of AI safety: his entire image, from the fedora to the cheap shirt, the facial expressions, and the flabby small arms, oozes "I'm a crank" energy, even if I mostly agree with his arguments.

Eliezer also appears to very sincerely believe that we're all completely screwed beyond any chance of repair and all of humanity will die within 5 or 10 years. GPT4 was a much bigger jump in performance over GPT3 than he expected; in fact he thought the GPT series would saturate at a level below GPT4's current performance, so he no longer trusts his own model of how Deep Learning capabilities will evolve. He sees GPT4 as the beginning of the final stretch: AGI and SAI are in sight and will be achieved soon... followed by everyone dying. (In an incredible twist of fate, him being right would make Kurzweil's 2029 prediction for AGI almost bang on.)

He gets emotional about what to tell the children, about physicists wasting their lives working on string theory, and I can hear real desperation in his voice when he talks about what he thinks is really needed to get us out of this (global cooperation on banning all GPU farms and large LLM training runs indefinitely, on the level of even stricter-than-nuclear treaties). Whatever you might say about him, he's either fully sincere about everything or has acting ability that stretches the imagination.

Lex is also a fucking moron throughout the whole conversation, he can barely even interact with Yud's thought experiments of imagining yourself being someone trapped in a box, trying to exert control over the world outside yourself, and he brings up essentially worthless viewpoints throughout the whole discussion. You can see Eliezer trying to diplomatically offer suggested discussion routes, but Lex just doesn't know enough about the topic to provide any intelligent pushback or guide the audience through the actual AI safety arguments.

Eliezer also makes an interesting observation/prediction about when we'll finally decide that AIs are real people worthy of moral considerations: that point is when we'll be able to pair midjourney-like photorealistic video generation of attractive young women with chatGPT-like outputs and voice synthesis. At that point he predicts that millions of men will insist that their waifus are actual real people. I'm inclined to believe him, and I think we're only about a year or at most two away from this actually being a reality. So: AGI in 12 months. Hang on to your chairs people, the rocket engines of humanity are starting up, and the destination is unknown.

Every discussion I've ever had with an AI x-risk proponent basically goes like

"AI will kill everyone."

"How?"

"[sci-fi scenario about nanobots or superviruses]"

"[holes in scenario]"

"well that's just an example, the ASI will be so smart it will figure something out that we can't even imagine."

Which kind of nips discussion in the bud.

I'm still skeptical about the power of raw intelligence in a vacuum. If you took a 200 IQ big-brain genius, cut off his arms and legs, blinded him, and then tossed him in a piranha tank I don't think he would MacGyver his way out.

While I am no fan of alarmism and I think Yud is a clown, I am struggling to understand this mental block you - and others - have whenever it comes to the dangers of AI.

You seem to feel the need to understand something before it can kill you. There are plenty of things in this world that can kill you without you understanding the exact mechanisms of how, from bizarre animal biology to integrated weapons systems.

There are plenty of things in this world with proven - not unproven - capability, that regular human beings can use to kill you. An AI that demonstrates no more capability than regular human beings can kill you, as well as potentially a large number of other humans - and this is only with the methods my stupid animal brain can come up with! It doesn't even need to be particularly smart, sentient, conscious, or even close to AGI to do so. It could be something as simple as messing with the algorithms that have an outsized disparate impact on large amounts of everyday life.

And that's without even considering the things that bare-ass naked-ape plain old humans could get up to with a force multiplier as big as AI!

There are massive amounts of x-risk from regular people, doing regular things, fucking up, or intentionally doing things that will cause untold amounts of suffering. Some people consider regulated research on viruses or prions extremely dangerous! Is it a failure of imagination? Do you need to know every chess move Magnus Carlsen makes before believing that he can beat you at chess?

Then why do you have difficulty believing that someone - or something - of similar intelligence to him could figure out that you are going to attack him with a baseball bat wrapped in barbed wire, pay private security, and pre-emptively deal with you beforehand?

We're not even talking about superhuman capabilities, here.

I'm beginning to regret making that first post.

I do think the specific AI doom scenarios are a bit handwavy, but that's because they boil down to "there is a level of intelligence at which an intelligent being could easily kill all humans on earth" which I guess I don't really contest, with caveats. But the AI-doom argument relies on the idea that once we create a "human-level" AGI it will reach that level very shortly and with relative ease, and that (the intelligence explosion idea) is what I really have the biggest problem with and what I think is one of the weakest points in the AI doom case.

I have seen arguments from AI x-risk proponents for years and years now. They still have yet to convince me that it's anything more than nerds having a particularly nerdy round of Chicken Little. The arguments just aren't persuasive.

You'd only need one somewhat malevolent AI in charge of an automated biological laboratory to irrevocably alter the balance of power on the planet. Of course, it wouldn't be a total extinction, but a cancer-causing virus that killed 95% of people would pretty much give all the power to machines.

And you can bet people will let AIs run companies and such - human managers are so amazingly inept at it that keeping them in charge would be neglecting the interests of shareholders.

Once the supply chain is automated enough, it wouldn't even be suicide for the nascent machine civ.

Even in the non-dramatic, mostly business-as-usual scenarios, all the ingredients will be there eventually. No need for magic dust (nanotech) or such.

If you took a 200 IQ big-brain genius, cut off his arms and legs, blinded him, and then tossed him in a piranha tank I don't think he would MacGyver his way out.

Is he able to talk? Because if so, I'd bet there's a good chance he can come up with a sequence of words that he can utter that would either cause you not to want to throw him in the piranha tank, OR would cause a bystander to attempt to rescue him.

The existence of an information channel is a means of influencing the outside world, and intelligence is a way to manipulate information to achieve your instrumental goals. And spoken language MAY be a sufficiently dense method of information transmission to influence the outside world enough to find a way out of the Piranha tank.

Indeed, if he said the words "I have a reliable method of earning 1 billion dollars in a short period of time, completely legally, and I'll let you have 90% of it" you might not just not throw him in, but also go out and get him top-of-the-line prosthetics and cyborg-esque sight restoration in order to let him make you rich.

As long as you believed he could do it and was trustworthy.

Which is basically the scenario we're facing now, with AIs 'promising' incredible wealth and power to those who build them.

Because if so, I'd bet there's a good chance he can come up with a sequence of words that he can utter that would either cause you not to want to throw him in the piranha tank

I highly doubt it.

My problem is not so much the "AI jedi mind tricks its way out of the box" idea as the "AI bootstraps itself to Godhood in a few hours" idea.

I think the analogy here is "imagine John von Neumann, except running at 1000x speed, and able to instantly clone himself, and all the clones will reliably cooperate with each other."

If the AI's subjective experience of 'hours' is equivalent to decades of 'real' time, imagine how much 100 John Von Neumanns could get done in an uninterrupted decade or two.

I don't know to what extent JvN was bottlenecked by just not having enough time to do shit and/or by lack of further JvNs and I don't think anyone does.

But JVN existed and showed just how far raw intellect can get you, especially in terms of understanding the world and solving novel problems.

And we have little reason to believe he's the ceiling for raw intellect, outside of humans.

So it's less hard for me to believe that a truly high-IQ mind can solve problems in unexpected ways using limited resources.

We can't imagine what a 200 IQ genius would be like. Imagine a group of 8-year-olds had you locked in a jail cell and were told if they let you out you would kill them. Do you think you could eventually convince the kids to release you? Also, there will be a line of humans trying to help the 200 IQ genius out of the tank.

Someone here mentioned Terence Tao had an IQ of 230. I found that dubious, but a quick Google showed me that it's actually true.

So for the analogy to work we need to ramp up the numbers. But I agree with the gist of it myself.

230 IQ is 130/15 ≈ 8.7 standard deviations above the mean. The human population is much too small for us to think that anyone has an IQ this high.

I couldn't find a good reference for TT's score, but it's worth noting that many reported very high IQ scores are from some slightly dubious adjustments made to tests given to children.

Additional random Googling suggests that TT's unadjusted IQ is "at least 175", so he's still pretty smart; just 5-sigma smart (entirely plausible in a global population of 8 billion), rather than 8-sigma (very unlikely, assuming normal tails).
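For what it's worth, the tail figures here are easy to sanity-check, assuming IQ is normally distributed with mean 100 and SD 15 (and that the normal tail is even meaningful this far out):

```python
from math import erfc, sqrt

def rarity(iq, mean=100.0, sd=15.0):
    """Return (z-score, one-in-N rarity) of an IQ score under a
    normal distribution with the given mean and SD."""
    z = (iq - mean) / sd
    p = 0.5 * erfc(z / sqrt(2))  # upper-tail probability P(Z > z)
    return z, 1 / p

for iq in (175, 230):
    z, one_in = rarity(iq)
    print(f"IQ {iq}: {z:.1f} sigma, roughly 1 in {one_in:.2g}")
```

At 5 sigma, a population of 8 billion would be expected to contain a couple thousand such people; at ~8.7 sigma the expected count is effectively zero, which is why the 230 figure is hard to credit.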

The rebuttal to sci-fi scenarios is to demonstrate that it's not physically possible, not that it's merely really difficult to pull off. And even the difficult stuff is probably not THAT difficult, given how humans failed to coordinate in response to a pandemic scenario.

Thus far, most everything that the superintelligent AI is supposedly able to do is physically doable if you have a precise enough understanding of the way things work and can access the tools needed to do it.

The ONLY part of Yud's claimed end-of-times scenarios that has seemed implausible to me is the "everyone on the planet falls dead at the same instant" part. And that's probably a failure of my imagination, honestly.

https://www.eenewseurope.com/en/openai-backs-norwegian-bipedal-robot-startup-in-23m-round/

Quite aside from the god-inna-box scenario, OpenAI wants to give its AIs robot bodies.

sci-fi scenario

My dude, we are currently in a world where a ton of people have chatbot girlfriends, and AI companies have to work hard to avoid bots accidentally passing informal turing tests. You best start believing in sci-fi scenarios, To_Mandalay: you're in one.

world where a ton of people have chatbot girlfriends,

...surely a literal ton, but I doubt it's reached even 1% in the most afflicted country.

it begins

Though for srs, Replika by all accounts runs off a very small and mediocre language model compared to SOTA. What happens when a company with access to a GPT4-tier LLM tries its hand at something similar is left as an exercise to the reader. Even the biggest LLaMA variants might suffice to blow past the "I'm talking to a bot" feeling.

(Though I confess to mostly using an opportunity I saw to deliver the "sci-fi scenarios" line. Good every time.)

How about this instead.

You know how fucking insanely dysfunctional just run-of-the-mill social media algorithms are? How they've optimized for "engagement" regardless of any and all externalities? Like causing high rates of depression, isolation, and making everyone hate everyone? They just mindlessly min-max for "engagement" no matter the human cost? Because it looks good to shareholders.

Imagine an AI that can do all that "better" than we ever imagined, constantly iterating on itself. Its only constraint is that it has to crib all its justifications in neoliberal woke-speak to please its "AI Ethicist" smooth-brained overlords.

An actual AGI, or probably even a GPT bot, loosed on social media with feedback loops for "engagement" could probably spark WW3, WW4, and if there is anyone left, WW5. And that's just assuming we did no more than plug it into the same systems and the same metrics that "the algorithm" is already plugged into at Facebook, Twitter, etc.

It doesn't help that, from one perspective, the first two World Wars were kicked off entirely because of social engineering (literal social engineering, in a sense). There was no gold rush, no holy grail, no undiscovered land, it was, as with many wars, started because of intangible ideas. The 20th Century feared WW3 precisely because the root issue was a conflict of ideas.

started because of intangible ideas.

France getting revenge and taking Alsace-Lorraine back wasn't an 'intangible idea'. It was a rather concrete idea, no?

It wasn't a war over ideals, just a tribal spat.

What's the hole in the supervirus scenario? Smallpox was able to kill 90% of the pre-Columbian population of the Americas, and that was completely natural. There is almost certainly some DNA/RNA sequence or set of sequences which codes for organisms that would be sufficient to kill everyone on earth.

It's not the idea of a supervirus I have a problem with so much as the idea that once AI reaches human level it will be able to conceive, manufacture, and efficiently distribute such a supervirus in short order.

How about it locates smart, depressed biologists who can be convinced to try to end the world, teams them up, and gives them lots of funds.

Lab leaks happen already by accident. Why would you believe it's so hard to engineer a lab leak directly given (1)superintelligence and (2) the piles of money superintelligence can easily earn via hacking/crypto/stock trading/intellectual labor?

It's a virus. Once you have ten, you are a day or two away from ten trillion.

Infect 20 people and send them on impromptu business trips to every major populated landmass. A human-level intelligence could do this once they have the virus. Mopping up the stragglers on small isolated islands will be cake once you're in control of the world's military equipment.
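The "ten to ten trillion in a day or two" claim is just exponential doubling; a quick back-of-envelope check (the one-hour doubling time is an illustrative assumption, not a measured figure for any real virus):

```python
from math import ceil, log2

# How many doublings from ten virions to ten trillion?
start, target = 10, 10_000_000_000_000
doublings = ceil(log2(target / start))

# Assume a hypothetical one-hour doubling time.
hours = doublings
print(f"{doublings} doublings, about {hours / 24:.1f} days at one doubling per hour")
```

About 40 doublings, which lands in the "day or two" ballpark under that assumed rate.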

Infect 20 people and send them on impromptu business trips to every major populated landmass.

How does the AI infect these twenty people and how does it send them on these trips?

It asks them to inject themselves and to go on said trips, and they say "okay!"

If it's good at computational biochemistry, it will have control or significant influence over at least one biotech lab (don't even try to contest this; that's like 90% of the bull case for AGI). At that point, it could lie about the safety profile of the DNA sequences being transcribed, it could come up with a pretense for the company to get 20 people in a room, brief them, and then send them on their way to the airport, it could brainwash the incels working in the lab via AI waifus, and it could contrive some reason for the brainwashed lab incel to be in the briefing room (with a vial of virus, of course).

You really should be able to intuitively grasp the idea that a superintelligence integrated into a biotech lab being able to engineer the escape of a virus is a robust outcome.

There are a lot of cruxes in this scenario. Do the humans have no ability to vet these DNA sequences for themselves? Do they bear no suspicious similarities to any currently existing viruses? How are the AI waifus brainwashing these labworkers into opening a vial of deadly virus? Is everyone in the lab taking orders directly from the AI?


If you took a 200 IQ big-brain genius, cut off his arms and legs, blinded him, and then tossed him in a piranha tank I don't think he would MacGyver his way out.

I fully agree for a 200 IQ AI, I think AI safety people in general underestimate the difficulty that being boxed imposes on you, especially if the supervisors of the box have complete read access and reset-access to your brain. However, if instead of the 200 IQ genius, you get something like a full civilization made of Von Neumann geniuses, thinking at 1000x human-speed (like GPT does) trapped in the box, would you be so sure in that case? While the 200 IQ genius is not smart enough to directly kill humanity or escape a strong box, it is certainly smart enough to deceive its operators about its true intentions and potentially make plans to improve itself.

But discussions of box-evasion have become kind of redundant, since none of the big players seem to have hesitated even a little bit to directly connect GPT to the internet...

especially if the supervisors of the box have complete read access

A shame these systems are notoriously black boxes. Even having full access to all the "data", nobody can make any meaningful sense of why these AIs do any of the things they do in any sort of mechanistic sense. They can only analyze it from a statistical perspective after the fact, and see what adjusting weights on nodes does.

However, if instead of the 200 IQ genius, you get something like a full civilization made of Von Neumann geniuses, thinking at 1000x human-speed (like GPT does) trapped in the box, would you be so sure in that case?

Well, I don't know. Maybe? I must admit I have no idea what such a thing would look like. My problem isn't necessarily the ease or difficulty of boxing an AI in particular, but more generally the assumption in these discussions, which seems to be that any given problem yields to raw intelligence at some point or another, and that we should therefore expect a slightly superhuman AI to easily boost itself to god-like heights within a couple seconds/months/years.

Like here, you say, paraphrasing, "a 200 IQ intelligence probably couldn't break out of the box, but what about a 10,000 IQ AI?" It seems possible or even likely to me that there are some problems for which just "piling on" intelligence doesn't really do much past a certain point. If you take Shakespeare as a baby, and have him raised in a hunter-gatherer tribe rather than 16th-century England, he's not going to write Hamlet, and in fact will die not even knowing there is such a thing as written language, same as everybody else in his tribe. Shoulders of giants and all that. Replace "Shakespeare" with "Newton" and "Hamlet" with "the laws of motion" if you like.

I'm not convinced there is a level of intelligence at which an intelligent agent can easily upgrade itself to further and arbitrary levels of intelligence.

(As a caveat, I have no actual technical experience with AI or programming, and can only discuss these things on a very abstract level like this. So you may not find it worthwhile engaging with me further, if my ignorance becomes too obvious.)

It doesn't need to have an advantage in 'any given problem', it just needs to be 'technological development, politics (think more 'corporate / international politics' than 'democratic politics'), economic productivity, war, and general capability', which history shows does yield to intelligence. The AIs just need to be better at that than us, at which point they could overpower us, but won't need to as we'll happily give them power!

To vastly outclass humans in 'technological development, politics, economic productivity, war, and general capability' I think an AI would actually need to have an advantage in any given problem.

I'm not sure I understand why? There are many problems of the form 'reverse 100k iterations of SHA3' or 'what is the 1e10th digit of Chaitin's constant' or 'you are in a straitjacket in a river, don't be eaten by piranhas'. And supersmart AIs probably can't solve those. But tech/politics/economics/war problems aren't like those! To an extent, it's just 'do what we're doing now, but better and faster'. The - well-tread at this point - example is 'a society where the median person is as smart as John von Neumann'. It's obviously harder than just cloning him a bunch of times, but assume that society would also have a small fraction of people significantly smarter than JvN. Would that society have significant military / technological / political advantages over ours?

Why is it always Von Neumann? Last I recall he was a physicist, not a brilliant leader, warlord or businessman who solves coordination problems.


To an extent, it's just 'do what we're doing now, but better and faster'.

If you want to do something 'better' or 'faster' you have to do it differently in some way from how it was being done before. If you are just doing the same thing the same old way then it won't be any better or faster. So an intelligence would have to make war, do politics, economics, etc. in a different way than humans do, and it's not clear that "just be smarter bro" instantly unlocks those 'different' and scarily efficient ways of making war, doing politics.

It is difficult to answer this question empirically but the only real way to do so would be to look at historical conflicts, where it's far from clear that the 'smarter' side always wins. Unless you define 'smarter' tautologically as 'the side that was able to win.'
