
Culture War Roundup for the week of November 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


OpenAI announces leadership transition

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

A member of OpenAI’s leadership team for five years, Mira has played a critical role in OpenAI’s evolution into a global AI leader. She brings a unique skill set, understanding of the company’s values, operations, and business, and already leads the company’s research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”

I posted this on Twitter, and someone speculated that it's because Altman paused subscriptions on Tuesday, but that alone would seem like a pretty inconsequential reason for such a major move.

This is pretty crazy, and it will be interesting to see people's takes on WHY this happened, from the unlikely (he was too woke), to the more likely (he wasn't woke enough), to the pants-shitting scary (researchers disturbed a Balrog and AGI is here).

One thing I still don't understand is how OpenAI is simultaneously a 501(c)(3) non-profit, half owned by Microsoft, and hoping to raise more money at an $86 billion valuation.

Am I wrong for being hopeful that Elon Musk seems to be making a second run at the AI space? I'm honestly a little worried about the direction OpenAI has taken lately, going headlong into AGI research while being hypervigilant about woke microaggressions.

I'm honestly a little worried about the direction OpenAI has taken lately, going headlong into AGI research while being hypervigilant about woke microaggressions

I'm not so sure I am; it seems to severely hamper the performance of their model. If they keep going down this path, they will be overtaken.

I figure that, in addition to the likelihood that many of the people working on the tech are genuinely sensitive to the possibility that their work might harm people, and that many others have pro-censorship political leanings, it's also that when people almost inevitably start using the AI to generate things like child porn and psychological advice that causes them to kill themselves, there will be such a giant shitstorm in the media and among the populace that companies are really scared to get too close to that scenario.

The fires are already being lit:

The IWF report reiterates the real world harm of AI images. Although children are not harmed directly in the making of the content, the images normalise predatory behaviour and can waste police resources as they investigate children that do not exist.

In some scenarios new forms of offence are being explored too, throwing up new complexities for law enforcement agencies.

For example, the IWF found hundreds of images of two girls whose pictures from a photoshoot at a non-nude modelling agency had been manipulated to put them in Category A sexual abuse scenes.

The reality is that they are now victims of Category A offences that never happened.

The definition of a victimless crime, if I've ever heard one.

As I've linked before, there is evidence that porn availability is associated with a decrease in sexual abuse rates, or with no change at all, rather than with an increase or normalization:

https://journals.sagepub.com/doi/full/10.1177/1524838020942754?journalCode=tvaa

I strongly expect that the same is true for child pornography.

At any rate, since no real people were harmed, I see no reason to get worked up over it, but then again, even the normies know that "think of the children" is often a ploy to enact irrational feel-good policies.

Not only do you get to use "think of the children", you also get to partake in socially-approved hate for a group of weirdos for their innate characteristics. Humans have always had an appetite for doing this, but in modern times there are far fewer acceptable targets.