
Culture War Roundup for the week of April 3, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Yet another Eliezer Yudkowsky podcast. This time with Dwarkesh Patel. This one is actually good though.

Listeners are presumed to be familiar with the basic concepts of AI risk, allowing much more in-depth discussion of the relevant issues. The general format is Patel presenting a series of reasons and arguments that AI might not destroy all value in the universe, and Yudkowsky ruthlessly destroying every single one. This goes on for four hours.

Patel is smart and familiar enough with the subject matter to ask the interesting questions you want asked. Most of the major objections to the doom thesis are raised at some point, and only one or two survive with even the tiniest shred of plausibility left. Yudkowsky is smart but not particularly charismatic. I doubt he would be able to defend a thesis this well if it were false.

It feels like the anti-doom position has been reduced to, “Arguments? You can prove anything with arguments. I’ll just stay right here and not blow myself up,” which is in fact a pretty decent argument. It's still hard to comprehend the massive hubris of researchers at the cutting-edge AI labs. I am concerned that correctly believing yourself capable of creating god is correlated with falsely believing yourself capable of controlling god.

I wish there were some way to make a counter-bet. You cannot even bet against these companies, given that they are private. There are betting markets, but part of the problem is that there is still no agreement about what it means for an AI to be aware, sentient, etc., or what an AI disaster or catastrophe would entail short of the world coming to an end. But my own take is that these fears are way overblown. The math and science of AI are real, but the doomsday argument seems like pseudoscience.

Here is how I would thwart AGI and future GPTs without regulation or a moratorium: create a competing company, offer much more lucrative salaries to lure AI researchers away from OpenAI and elsewhere, and then have them produce something harmless or useless. This does not, however, address the possible threat of China.

How can this technology not make money? I saw one guy who talked with GPT-3 for a bit and got it to make a game: idea, code and UI all made by the machine, albeit human-selected from many options. The game, Sumplete, is a bit like Sudoku. Not a bad game!

The growth curve of users on ChatGPT (all but obsolete nowadays with GPT-4) is just a straight vertical line. What could be more profitable than that? There's clearly a huge market to be tapped. And your investments are all AI companies! META, Tesla, TQQQ, MSFT. Facebook gets money from ads, using its algorithm (AI) and they have a big AI research department. Tesla and self-driving, Microsoft and GPT... How can anyone start a company that competes with the biggest companies in the world for talent and get them to do useless work?


The growth curve of users on ChatGPT (all but obsolete nowadays with GPT-4) is just a straight vertical line. What could be more profitable than that?

It's hard to turn users and growth into profits. Facebook and Alphabet do that easily; most others don't.

And your investments are all AI companies! META, Tesla, TQQQ, MSFT. Facebook gets money from ads, using its algorithm (AI) and they have a big AI research department. Tesla and self-driving, Microsoft and GPT.

Humans use air, yet there is no way to make money from that. Many websites use PHP, yet you cannot really invest in PHP or C++. Just because companies use "X" does not mean profiting from "X" is possible. "Using AI" does not imply "OpenAI is a good investment."

OpenAI got a lot of users partly because of media hype, partly because the web is much bigger today than it was a decade or more ago when those other sites launched, and partly because of its partnership with Microsoft. The source below says 13 million daily users, a small fraction of the alleged 100 million. It's possible the 100 million is inflated by lumping all Microsoft users, like Outlook or Windows, in with OpenAI.

https://www.businesstoday.in/technology/news/story/open-ais-chatgpt-hits-100-million-users-makes-history-as-fastest-growing-app-368753-2023-02-03

By comparison, Facebook had 350+ million daily users even as far back as early 2011:

https://www.statista.com/graphic/1/346167/facebook-global-dau.jpg

It's hard to turn users and growth into profits.

Give it time. ChatGPT came out five months ago and people are already doing pretty cool things with it. Google took three years to turn a profit; Facebook took five.

Humans use air, yet there is no way to make money from that. Many websites use php, yet you cannot really invest in php or C++.

Air is non-rivalrous and non-excludable; it and PHP are in a totally different reference class from this kind of AI service, which clearly is rivalrous and excludable. AI is not a public good.

It's possible this 100 million is inflated by lumping all Microsoft users, like outlook or windows, with Open Ai.

That's false. I made an account with OpenAI specifically to use ChatGPT. Besides, there are far more than 100 million Windows and Microsoft users, so lumping them in would give a much bigger figure.

I’m guessing they’re counting everyone who’s used Bing for search since incorporating GPT, not just people who’ve chatted with it.

That's false. I made an account with OpenAI specifically to use ChatGPT. Besides, there are far more than 100 million Windows and Microsoft users, so lumping them in would give a much bigger figure.

I was just speculating on how they got to 100 million. Making an account to test the service is not the same as returning to it, as with Facebook or Instagram, where people are hooked on the dopamine of social media and keep coming back. The daily user count of 13 million is decent, but still a long way from even 2011 Facebook. People are not going to pay money for something they use infrequently; some will, but not enough to make this as valuable as Meta.

I am trying to think how I could use ChatGPT in my everyday life. Maybe as a research assistant?

Give it time. ChatGPT came out five months ago and people are already doing pretty cool things with it. Google took three years to turn a profit; Facebook took five.

As of January 2023, OpenAI is valued at $30 billion, maybe even more now. So a lot of growth and earnings are already priced in, which is why I would be keen to bet against it now. Even Facebook in 2009 was valued at only $6.5 billion, yet it had more users and earnings than OpenAI does today.

There is a standard simple way to bet against doomsday: you tell Eliezer that you'll give him $X today, and he agrees that if he's still alive in, say, 2040, he'll give you back $10X.

He would never accept such a bet. He strongly believes there is a very high likelihood of AI destroying the world. If he's right, he stands to gain no upside from such a bet, and I obviously would not have to honor it either.

Well the idea is that he might still have some use for that money in what he estimates to be his few remaining years.
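The payoff structure of that bet is simple enough to sketch out. Here is a toy expected-value calculation, with all probabilities and dollar amounts made up for illustration; it ignores the time value of money and the fact that cash is worthless to both parties in the doom scenario, which is exactly the wrinkle being argued about above:

```python
def skeptic_ev(stake, p_survive, payout_multiple=10):
    """Expected value for the doom-skeptic: pay `stake` to the doomer today,
    and receive payout_multiple * stake back only in worlds where everyone
    is still around in 2040 to settle up."""
    return p_survive * payout_multiple * stake - stake

# Made-up numbers: a $100 stake with a 90% chance of survival is a great
# deal for the skeptic, while at 5% survival the doomer keeps the money
# in (almost) every world that matters to him.
print(skeptic_ev(100, 0.90))  # positive expectation for the skeptic
print(skeptic_ev(100, 0.05))  # negative expectation for the skeptic
```

Note that the doomer comes out ahead in expectation whenever the survival probability is below 1 / payout_multiple, which is why the payout multiple effectively encodes the odds the two sides are implicitly agreeing on.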