Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

someone in the HN thread reminded me of this again, and I realized I didn't remember the entire story here. part of the thing with reddit was that Sam Altman not only engineered Conde Nast into being a minority stakeholder, but did so by helping to manufacture a series of leadership crises at reddit. if you're not familiar with this, here's Yishan, a former CEO of reddit, saying exactly this in a manner second only in wink-wink-nudge-nudge to If I Did It.

Here's one.

In 2006, reddit was sold to Conde Nast. It was soon obvious to many that the sale had been premature: the site was unmanaged and under-resourced under the old-media giant, which simply didn't understand it and could never realize its full potential. So the founders and their allies in Y-Combinator (where reddit had been born) hatched an audacious plan to re-extract reddit from the clutches of the 100-year-old media conglomerate.

Together with Sam Altman, they recruited a young up-and-coming technology manager with social media credentials. Alexis, who was on the interview panel for the new reddit CEO, would reject all other candidates except this one. The manager was to insist as a condition of taking the job that Conde Nast would have to give up significant ownership of the company, first to employees by justifying the need for equity to be able to hire top talent, bringing in Silicon Valley insiders to help run the company. After continuing to grow the company, he would then further dilute Conde Nast's ownership by raising money from a syndicate of Silicon Valley investors led by Sam Altman, now the President of Y-Combinator itself, who in the process would take a seat on the board.

Once this was done, he and his team would manufacture a series of otherwise-improbable leadership crises, forcing the new board to scramble to find a new CEO, allowing Altman to use his position on the board to advocate for the re-introduction of the old founders, installing them on the board and as CEO, thus returning the company to their control and relegating Conde Nast to a position as minority shareholder.

JUST KIDDING. There's no way that could happen.

https://old.reddit.com/r/AskReddit/comments/3cs78i/comment/cszjqg2/

this seems similar to what ended up essentially happening at OpenAI, although it's over board seats rather than stake in the company.

My story: Maybe they had lofty goals, maybe not, but it sounded like the whole thing was instigated by Altman trying to fire Toner (one of the board members) over a silly pretext of her coauthoring a paper that nobody read that was very mildly negative about OpenAI, during her day job. https://www.nytimes.com/2023/11/21/technology/openai-altman-...

And then presumably the other board members read the writing on the wall (especially seeing how 3 other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman can kick out Toner under such flimsy pretexts, they'd be out too.

So they allied with Helen to countercoup Greg/Sam.

I think the anti-board perspective is that this is all shallow bickering over a 90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO, so if the CEO could easily appoint only loyalists, then the board is a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.

https://news.ycombinator.com/item?id=38386365

I imagine this HN commenter is right and at the end of the day this comes down to capitalism.

That's a misreading of the situation. The employees saw their big bag vanishing and suddenly realised they were employed by a non-profit entity that had loftier goals than making a buck, so they rallied to overturn it and they've gotten their way. This is a net negative for anyone not financially invested in OAI.

https://news.ycombinator.com/item?id=38376123

this is probably not news to themotte, but it also seems pretty evident to me that the nonprofit's goals were wholly unimportant to those working there. whether you like openai or not[1], the name was and is a punching bag essentially because it's neither open nor AI. the weird structure was probably seen by those working there as just tax evasion (sorry, avoidance) plus that aforementioned rubber stamp.

but that's the way the cookie crumbles. Larry Summers of all people[2] being added to the board is darkly hilarious, though. it's basically taking off the mask.

[1] I don't particularly care one way or another about them as I don't use their stuff nor plan to.

[2] this part is more unrelated snark so i'm leaving it to a footnote, but he's a great measure for economists: he managed to predict that 3 contradictory things would happen with regard to inflation, and none of those 3 things happened.

This sort of outcome is what makes it very, very difficult for me to take the AI doomerism seriously. Yes, we may get Paperclip Maximiser AGI, but I think it's much more likely to come about by "humans in notional charge think it will make them trillions and so follow blindly its advice" than "machine becomes agent and decides on its own goals". I have no belief in Fairy Godmother AGI that will make every single human on the planet (and that means every single human, not simply 'coastal cities PMC types') rich and happy forever and ever because it magically figured out workarounds to bypass the physical limits of the natural world to give us free energy and infinite resources.

Some people are going to get very, very rich off this, the rest of us? Survival, scrabbling, or gig economy as right now.

The theoreticians of alignment were doomed from the start, since in reality it was never going to work out how they hoped. "Let's write heartfelt letters about the dangers of AI and the need to slow down research in order to avert the danger to humanity" - yes, and who signed off on that letter? One Mr. Sam Altman. I read a claim in a news article that this kind of 'support' from corporations was all about positioning themselves to be first to market and making it difficult or impossible for smaller, newer start-ups to rival them, and not at all about the ostensible 'threat to humanity'.

And I think we see this working out in real time right now. The pro-safety faction within OpenAI moved against Altman, due (it is being speculated) to fears that he was too much on the "get a product out to market, and to heck with the cautious safety-first approach" train. This hit Microsoft's share price, and now Altman is back (for the moment, anyway) and it's a safe bet that OpenAI will now be moving ahead with enabling Microsoft to gain first mover advantage by having their pet AI widely available commercially.

OpenAI's real function, even if the idealists on the board didn't realise it, was to provide the necessary reassurance for the regulators and government: "yes indeed, we are ticking all the safety boxes, no worries!" That's why Altman scolded Toner for her paper: it didn't matter that it was only really read by nerds; writing it was not doing her job, which was to help sell OpenAI as the bestest, safest, no-need-for-government-interference operation while they developed the product.

This sort of outcome is what makes it very, very difficult for me to take the AI doomerism seriously. Yes, we may get Paperclip Maximiser AGI, but I think it's much more likely to come about by "humans in notional charge think it will make them trillions and so follow blindly its advice" than "machine becomes agent and decides on its own goals".

I'm not sure I follow your logic here.

You don't take AI doomerism seriously because you think that AI doom is likely but through a different path than the 'paperclip maximizer'? I'm pretty certain that the AI safety crowd are just as worried about manipulative oracle AIs as they are about mindless paperclip maximizers.

I think the problem is not machines, but people. And if people blindly put their trust in "the machine output must be correct" because they have visions of dollar signs, then we're screwed, but it's not the AI that decided to screw us, it's the people who were doing things in accordance with what the AI said.

Right now, OpenAI - which has its lovely charter about safety and so forth - is trashed. It wasn't their AI that suddenly woke up and became an agent with goals of its own that did it, it was good old-fashioned greed. Getting rid of Altman was supposed to slow down the adoption of unaligned or insufficiently aligned AI. It was perceived as costing money, and so money won in the end.

Sounds like you still agree with us doomers? We don't expect human greed / competitive pressures to go away any time soon, which is why we're worried about exactly the kinds of money-winning scenarios you propose.