
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


someone in the HN thread reminded me of this again, and I remembered I didn't remember the entire story here. part of the thing with reddit was that Sam Altman engineered Conde Nast into being a minority stakeholder, partly by helping to manufacture a bunch of leadership crises at reddit. if you're not familiar with this, here's Yishan, a former CEO of reddit, saying exactly this in a manner that is second only in wink wink nudge nudge to If I Did It.

Here's one.

In 2006, reddit was sold to Conde Nast. It was soon obvious to many that the sale had been premature, the site was unmanaged and under-resourced under the old-media giant who simply didn't understand it and could never realize its full potential, so the founders and their allies in Y-Combinator (where reddit had been born) hatched an audacious plan to re-extract reddit from the clutches of the 100-year-old media conglomerate.

Together with Sam Altman, they recruited a young up-and-coming technology manager with social media credentials. Alexis, who was on the interview panel for the new reddit CEO, would reject all other candidates except this one. The manager was to insist as a condition of taking the job that Conde Nast would have to give up significant ownership of the company, first to employees by justifying the need for equity to be able to hire top talent, bringing in Silicon Valley insiders to help run the company. After continuing to grow the company, he would then further dilute Conde Nast's ownership by raising money from a syndicate of Silicon Valley investors led by Sam Altman, now the President of Y-Combinator itself, who in the process would take a seat on the board.

Once this was done, he and his team would manufacture a series of otherwise-improbable leadership crises, forcing the new board to scramble to find a new CEO, allowing Altman to use his position on the board to advocate for the re-introduction of the old founders, installing them on the board and as CEO, thus returning the company to their control and relegating Conde Nast to a position as minority shareholder.

JUST KIDDING. There's no way that could happen.

https://old.reddit.com/r/AskReddit/comments/3cs78i/comment/cszjqg2/

this seems similar to what essentially ended up happening at OpenAI, although there it was over board seats rather than a stake in the company.

My story: Maybe they had lofty goals, maybe not, but it sounded like the whole thing was instigated by Altman trying to fire Toner (one of the board members) over a silly pretext of her coauthoring a paper that nobody read that was very mildly negative about OpenAI, during her day job. https://www.nytimes.com/2023/11/21/technology/openai-altman-...

And then presumably the other board members read the writing on the wall (especially seeing how 3 other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman can kick out Toner under such flimsy pretexts, they'd be out too.

So they allied with Helen to countercoup Greg/Sam.

I think the anti-board perspective is that this is all shallow bickering over a $90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO, so if the CEO could easily appoint only loyalists, then the board is a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.

https://news.ycombinator.com/item?id=38386365

I imagine this HN commenter is right and at the end of the day this comes down to capitalism.

That's a misreading of the situation. The employees saw their big bag vanishing and suddenly realised they were employed by a non-profit entity that had loftier goals than making a buck, so they rallied to overturn it and they've gotten their way. This is a net negative for anyone not financially invested in OAI.

https://news.ycombinator.com/item?id=38376123

this is probably not news to themotte, but it also seems pretty evident to me that the nonprofit's goals were wholly unimportant to those working there. whether you like openai or not[1], the name was and is a punching bag essentially because it's neither open nor ai. the weird structure probably just looked, to those working there, like tax evasion (sorry, avoidance) plus that aforementioned rubber stamp.

but that's the way the cookie crumbles. Larry Summers of all people[2] being added to the board is darkly hilarious though. it's basically taking off the mask.

[1] I don't particularly care one way or another about them, as I don't use their stuff nor plan to.

[2] this part is more unrelated snark so i'm leaving it to a footnote, but he's a great measure for economists: he managed to predict that 3 contradictory things were going to happen with regard to inflation, and none of the 3 happened.

The Board was in a lose-lose situation, either they could let Altman continue unchecked, or pick a fight they'd lose.

He's already consolidated personal power and has the loyalty of most of the employees, even if I'm sure many of them are pissed at their equity going up in smoke, and the Board, able to exercise power only through hiring and firing CEOs, was powerless to do much about it. They only had the option to C4 a problem where the occasional use of a scalpel might have helped.

The whole thing is both a farce and a tragedy, I have no confidence anymore that anybody will be able to halt the deployment of an AGI on any grounds, profit motive will continue going brrr. The only saving grace is that Altman is still x-risk pilled, so the potential increase in risk isn't unbounded.

I agree. The big takeaway from this weekend was that humanity as a species is in deep trouble because, when the chips are down, people will care more about money than principles.

I also think the criticism of the board is bizarre. They took their shot. They lost, ultimately because the employees didn't want to lose their huge salaries and bonuses. But what is power if you don't try to use it? This wasn't a strategic blunder, it was merely a board that never had any power in the first place.

Had they not taken action, they would have been swept aside anyway.

Perhaps their action will have some residual value in revealing previously hidden information. If anyone had any illusions about OpenAI before, they shouldn't now. We know what OpenAI is - a company that seeks power and money above all else. The news of research breakthroughs coming from OpenAI is deeply troubling.

The board badly fucked up their shot. I think they had a chance at persuading at least a fraction of OpenAI that Sam did some things wrong, if indeed he had. A number of OpenAI employees care about AI safety, and some even expressed online, initially after Altman's firing, that they were open to the possibility that it was deserved. But when the board said nothing for days, even to the first and second interim CEOs, that left the employees with nothing to think about but concerns about their future income and whatever Sam told them.

They also could've spent the preceding months building up a case against Sam, maybe picking a replacement CEO the OpenAI team would be amenable to, etc. But they didn't!

We're in agreement on this one. I do think the idealists (including the EA faction which went all-in on X-risk and unaligned AI) are sincere and genuinely want to help humanity, but they're never going to succeed in our current climate.

On one side you have those who are more interested in playing with the technology, moving fast and breaking things, and chasing what they hope will be a technocratic future where they get their hearts' desires - the sort who end up dismissing any opposition as Luddites, or reaching for the go-to appeal of 'but if we don't do it, China will first, and you don't want China to rule the future, do you?'. On the other you have the good old profit motive: by working to halt AI progress I am working against my own interests, since that would make the equity go up in smoke - so despite ostensibly being pro-safety and pro-slowdown, I'm not really. Between the two, begging for a moratorium has a snowball in Hell's chance of being listened to, never mind taken seriously.