
Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

It’s the NYT, so it’s hard to tell for sure how big of a deal this is, but it sounds like this guy taught Ilya Sutskever.

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

One of the lines I see from techno-optimists and e/acc is that the people actually building the technology don’t believe in doom. It’s just the abstract philosophers on the sidelines freaking out because they don’t know anything. Unfortunately, this feels like the kind of move you only get if the people at the cutting edge are nervous. Hinton must have been raking in cash, but he thought this was more important.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

Of course, it wouldn’t be a Cade Metz article without allegations of dishonest reporting:

In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.

the people actually building the technology don’t believe in doom.

I don’t see why the people building the technology should be taken to be any more informed than the average interested layman on this point.

An AI that’s intelligent enough to be an x-risk is, as of today, a purely hypothetical entity. No one can have technical expertise regarding such entities because we have no empirical examples to study. No one knows how it might behave, what its goals might be, or how easy it would be to align; one guess is as good as any other.

Professional AI researchers could have technical expertise regarding questions about the rate of AI progress, or how close we may or may not be to building an x-risk-level AI; but given the disagreement in the field over even basic questions like “are LLMs alone enough for AGI, or will they plateau?”, I think you could find a professional opinion to support any position you wanted to take.

Thus even the most informed AI researcher’s views on doom and utopia should be viewed primarily as a reflection of their own personal ideological disposition towards AI, rather than as being the result of carefully considered technical arguments.

I don’t see why the people building the technology should be taken to be any more informed than the average interested layman on this point.

Because the technology is complicated and they actually know how it works?

Because the layperson is basing their understanding on movies like The Terminator?

Really? Honest question, do you think you have a good understanding of AI?

Sorry if I didn't make it clear enough: when I said "doom" I was specifically thinking of Yudkowskian nanobot doom. No one on earth has technical expertise regarding such technology, because it doesn't exist. No one knows how to build it or how it would behave once built.

Honest question, do you think you have a good understanding of AI?

No, but nothing in my post relied on such a (technical) understanding.

I was specifically thinking of Yudkowskian nanobot doom.

I misunderstood you. My mistake.