
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


someone in the HN thread reminded me of this again, and I realized I didn't remember the entire story here. part of the thing with reddit was that Sam Altman engineered Conde Nast into being a minority stakeholder, in part by helping to manufacture a bunch of leadership crises at reddit. if you're not familiar with this, here's Yishan, a former CEO of reddit, saying exactly this in a manner that is second only in wink-wink-nudge-nudge to If I Did It.

Here's one.

In 2006, reddit was sold to Conde Nast. It was soon obvious to many that the sale had been premature, the site was unmanaged and under-resourced under the old-media giant who simply didn't understand it and could never realize its full potential, so the founders and their allies in Y-Combinator (where reddit had been born) hatched an audacious plan to re-extract reddit from the clutches of the 100-year-old media conglomerate.

Together with Sam Altman, they recruited a young up-and-coming technology manager with social media credentials. Alexis, who was on the interview panel for the new reddit CEO, would reject all other candidates except this one. The manager was to insist as a condition of taking the job that Conde Nast would have to give up significant ownership of the company, first to employees by justifying the need for equity to be able to hire top talent, bringing in Silicon Valley insiders to help run the company. After continuing to grow the company, he would then further dilute Conde Nast's ownership by raising money from a syndicate of Silicon Valley investors led by Sam Altman, now the President of Y-Combinator itself, who in the process would take a seat on the board.

Once this was done, he and his team would manufacture a series of otherwise-improbable leadership crises, forcing the new board to scramble to find a new CEO, allowing Altman to use his position on the board to advocate for the re-introduction of the old founders, installing them on the board and as CEO, thus returning the company to their control and relegating Conde Nast to a position as minority shareholder.

JUST KIDDING. There's no way that could happen.

https://old.reddit.com/r/AskReddit/comments/3cs78i/comment/cszjqg2/

this seems similar to what essentially ended up happening at OpenAI, although there it's over board seats rather than a stake in the company.

My story: Maybe they had lofty goals, maybe not, but it sounded like the whole thing was instigated by Altman trying to fire Toner (one of the board members) over a silly pretext of her coauthoring a paper that nobody read that was very mildly negative about OpenAI, during her day job. https://www.nytimes.com/2023/11/21/technology/openai-altman-...

And then presumably the other board members read the writing on the wall (especially seeing how 3 other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman can kick out Toner under such flimsy pretexts, they'd be out too.

So they allied with Helen to countercoup Greg/Sam.

I think the anti-board perspective is that this is all shallow bickering over a 90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO, so if the CEO could easily appoint only loyalists, then the board is a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.

https://news.ycombinator.com/item?id=38386365

I imagine this HN commenter is right and at the end of the day this comes down to capitalism.

That's a misreading of the situation. The employees saw their big bag vanishing and suddenly realised they were employed by a non-profit entity that had loftier goals than making a buck, so they rallied to overturn it and they've gotten their way. This is a net negative for anyone not financially invested in OAI.

https://news.ycombinator.com/item?id=38376123

this is probably not news to themotte, but it also seems pretty evident to me that the nonprofit's goals were wholly unimportant to those working there. whether you like openai or not[1], the name was and is a punching bag essentially because it's neither open nor ai. the weird structure was probably seen by those working there as just tax evasion (sorry, avoidance) plus that aforementioned rubber stamp.

but that's the way the cookie crumbles. Larry Summers of all people[2] being added to the board is darkly hilarious though. it's basically taking off the mask.

[1] I don't particularly care one way or another about them as I don't use their stuff nor plan to.

[2] this part is more unrelated snark so i'm leaving it to a footnote, but he's a great yardstick for economists: he managed to predict that 3 contradictory things were going to happen with regard to inflation, and none of those 3 things happened.

The board lost because a bunch of dweebs, who are improbably being paid many millions of dollars in stock they'd theoretically cash in in the event of an IPO or a sale to Microsoft for $200bn, rioted when firing Altman jeopardized their big payday. There is no depth above or below this level.

I think Matt Levine had the correct take in his latest Money Stuff. The basic idea is there's some natural tension between the board of OpenAI and its employees/investors. The board is committed to the non-profit mission of building safe AI while the employees and investors want to build a commercially viable product that turns their equity into big piles of money. There can naturally be some tension between these things!

While the board has a certain formal legal power over the employees and investors it cannot actually accomplish much without them. So the employees and investors have a great deal of informal power. Currently it seems like the balance of power is on the "make big piles of money" side and this will probably be more true after the restructuring.

I think one outcome here is the IRS should probably revoke OpenAI's charitable status. It is hard for me to take the idea they are a charitable organization seriously when the CEO of the for-profit subsidiary can overrule the board to which he ostensibly reports in order to make more money.

This sort of outcome is what makes it very, very difficult for me to take the AI doomerism seriously. Yes, we may get Paperclip Maximiser AGI, but I think it's much more likely to come about by "humans in notional charge think it will make them trillions and so follow blindly its advice" than "machine becomes agent and decides on its own goals". I have no belief in Fairy Godmother AGI that will make every single human on the planet (and that means every single human, not simply 'coastal cities PMC types') rich and happy forever and ever because it magically figured out workarounds to bypass the physical limits of the natural world to give us free energy and infinite resources.

Some people are going to get very, very rich off this. The rest of us? Survival, scrabbling, or the gig economy, same as right now.

The alignment theorists were doomed from the start, since in reality it was never going to work out how they hoped. "Let's write heartfelt letters about the dangers of AI and the need to slow down research in order to avert the danger to humanity" - yes, and who signed off on that letter? One Mr. Sam Altman. I read a claim in a news article that this kind of 'support' by corporations etc. was all about positioning themselves to be first to market and making it difficult or impossible for smaller, newer start-ups to rival them, and not at all about the ostensible 'threat to humanity'.

And I think we see this working out in real time right now. The pro-safety faction within OpenAI moved against Altman, due (it is being speculated) to fears that he was too much on the "get a product out to market, and to heck with the cautious safety-first approach" train. This hit Microsoft's share price, and now Altman is back (for the moment, anyway) and it's a safe bet that OpenAI will now be moving ahead with enabling Microsoft to gain first mover advantage by having their pet AI widely available commercially.

OpenAI's real function, even if the idealists on the board didn't realise it, was to provide the necessary reassurance for the regulators and government: "yes indeed, we are ticking all the safety boxes, no worries!" That's why Altman scolded Toner for her paper; it didn't matter that it was only really read by nerds, the point was that she was not doing her job, which was to help sell OpenAI as the bestest, safest, no-need-for-government-interference-while-we-develop-the-product outfit.

This sort of outcome is what makes it very, very difficult for me to take the AI doomerism seriously. Yes, we may get Paperclip Maximiser AGI, but I think it's much more likely to come about by "humans in notional charge think it will make them trillions and so follow blindly its advice" than "machine becomes agent and decides on its own goals".

I'm not sure I follow your logic here.

You don't take AI doomerism seriously because you think that AI doom is likely but through a different path than the 'paperclip maximizer'? I'm pretty certain that the AI safety crowd are just as worried about manipulative oracle AIs as they are about mindless paperclip maximizers.

I think the problem is not machines, but people. And if people blindly put their trust in "the machine output must be correct" because they have visions of dollar signs, then we're screwed, but it's not the AI that decided to screw us, it's the people who were doing things in accordance with what the AI said.

Right now, OpenAI - which has its lovely charter about safety and so forth - is trashed. It wasn't their AI that suddenly woke up and became an agent with goals of its own that did it, it was good old-fashioned greed. Getting rid of Altman was supposed to slow down the adoption of unaligned or insufficiently aligned AI. It was perceived as costing money, and so money won in the end.

Sounds like you still agree with us doomers? We don't expect human greed / competitive pressures to go away any time soon, which is why we're worried about exactly the kinds of money-winning scenarios you propose.

How exactly are the coastal PMC types going to get rich in a way that doesn't enrich the rest of us?

If AGI can manufacture goods much more cheaply, then that means cheap goods for everyone. If AGI can provide services for zero to low cost, that means cheap or free services for everyone.

While there are situations where individuals can get rich at the expense of the masses through rent-seeking (I'm thinking of someone like Carlos Slim monopolising Mexican telecoms), the overwhelming majority of billionaires got that way by providing something useful to the masses. Elon Musk sold luxury electric cars, Jeff Bezos provided an online retail experience far superior to anything that came before it, Steve Jobs sold consumer-friendly, well-designed electronics.

If Sam Altman ends up a trillionaire, how exactly could that leave the rest of us poorer?

If Sam Altman ends up a trillionaire, how exactly could that leave the rest of us poorer?

Bezos, Musk, etc. have fortunes but that money is not making its way to me. I have even less reason to think that Altman as a trillionaire is going to mean the lake fisherman in Tanzania suddenly getting thousands of dollars extra per year as a wage. Cheap goods/services for everyone is a nice-sounding idea, but it relies on "I have enough money as disposable income to purchase those goods/services". If I lose my job because the company replaced me with AI, it doesn't matter how 'cheap' the next model iPhone is now that it's made by AI; I'm not going to be buying one.

Bezos, Musk, etc. have fortunes but that money is not making its way to me.

To the degree that they have cash, it is making its way to you about as much as any other cash; as for their wealth, they created it by creating lots of utility. I can order something online and have it the very same day. That's awesome. Thanks Jeff. You deserve all that money for doing something so awesome. You earned it! Thanks Elon for the cool cars and internet!

These are things people want (we know this because people pay them for these things).

If you've ever used Amazon then you have benefitted from Bezos' success. You've benefitted from the consumer surplus generated by Amazon's existence. Whether that comes from cheaper goods, faster delivery, greater choice, or more convenience, the fact that you've used the website demonstrates that you've derived value from doing so relative to what else was available. The same goes for any other company you've ever interacted with.

And if everyone's jobs get replaced by AI without any financial recompense, then nobody will have any money to spend on these companies that have done the job-replacing. They would need to compete with each other for what small purchasing power remains, which means lowering prices to near-zero. This is easy enough when your labour costs have been reduced to zero by the AI that took everyone's jobs.

AI represents a potential increase in productivity, and increasing productivity is literally what economic growth is. From the industrial revolution to now, increasing productivity is why we were able to escape the zero-sum world that existed before.

Whether it destroys the world is another thing, of course.

Just because there are no humans in the loop purchasing or selling goods and services doesn't mean that companies are out of luck; they'll merely sell to each other, in a fully automated economy akin to a Disneyland with no children.

We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.

Automated Tesla sells electric vehicles and batteries to companies providing transportation to automated mining companies, which sell ores to automated refining and manufacturing companies, which sell their output to someone else. There doesn't need to be any humans involved anywhere, barring those who own a stake in such entities, and the loss of human purchasing power from automation will mean fuck-all.

And if everyone's jobs get replaced by AI without any financial recompense, then nobody will have any money to spend on these companies that have done the job-replacing. They would need to compete with each other for what small purchasing power remains, which means lowering prices to near-zero.

Why would they need to sell goods to people with no purchasing power... or combat power, considering that being a soldier is also a job? Economics ends if there is no scarcity.

Why wouldn't they build giant theme parks for themselves, or clone and cater to themselves, or run off to explore space, or have fun in VR in a giant underground fortress guarded by robots? I believe that power corrupts, that absolute power corrupts absolutely. A world where one or a few men control all wealth and power is not going to be good for those without wealth or power.

If Sam Altman ends up a trillionaire, how exactly could that leave the rest of us poorer?

I can think of a few ways.

Fast and constant inflation absorbing the productivity gains of technology into asset prices.

AI making society super productive but a loaf of bread being 10 bucks, and only the richest being able to afford land. You'll own nothing and you'll be happy with UBI in exchange for guarantees of control, which is the model of Altman's other venture, Worldcoin.

A rising tide lifts all boats in a free market. We do not live in one.
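
To make that scenario concrete, here's a minimal sketch (my own illustration, not the commenter's; every number in it is a made-up assumption) of how "society got more productive" and "the median person got poorer" can both be true at once, if essentials and assets inflate faster than wages:

```python
# Hypothetical figures for illustration only.
wage_growth = 1.5       # wages up 50% over the period (assumed)
bread_inflation = 5.0   # the $2 loaf becomes $10 (the comment's figure)
land_inflation = 8.0    # asset prices absorb most of the gains (assumed)

# Purchasing power relative to the start of the period:
real_bread_wage = wage_growth / bread_inflation  # 0.30: ~70% less bread per hour worked
real_land_wage = wage_growth / land_inflation    # ~0.19: land drifts out of reach

print(f"bread affordability: {real_bread_wage:.2f}x")
print(f"land affordability:  {real_land_wage:.2f}x")
```

Whether anything like those numbers materialises is exactly what's in dispute; the sketch only shows that productivity growth and broad impoverishment aren't arithmetically incompatible.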

AI making society super productive but a loaf of bread being 10 bucks

If the current marginal cost of production for a loaf of bread is about $2 (just looked at the website of the closest grocery store to my current location), and AI makes society super productive, do you think the real marginal cost of production for a loaf of bread will be (a) Less than $2, (b) About $2, (c) Greater than $2, but less than $10, (d) About $10, or (e) Greater than $10?

If you chose one of (a-c), why do you think that the price of bread will not trend toward the marginal cost of production, as is the standard result in economics for goods like bread?

If you chose one of (b-e), why do you think that an across-the-board increase in productivity will not reduce the marginal cost of production of bread?
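
For the other side of that argument, here's an equally minimal sketch (again my own illustration; the function name and all prices are assumptions) of the textbook Bertrand-competition dynamic the questions above appeal to: with an undifferentiated good like bread, rival producers undercut one another until the price sits just above marginal cost, so a productivity gain that lowers marginal cost drags the price down with it rather than letting it float up to $10:

```python
# Toy Bertrand competition: two identical bread producers undercut each
# other one cent at a time until undercutting further would mean selling
# below marginal cost. Purely illustrative; all figures are assumed.

def bertrand_price(marginal_cost_cents: int, start_price_cents: int) -> float:
    price = start_price_cents
    while price - 1 > marginal_cost_cents:
        price -= 1  # the rival undercuts to capture the whole market
    return price / 100

print(bertrand_price(200, 1000))  # 2.01 -- today's ~$2 marginal cost pins the price near $2
print(bertrand_price(50, 1000))   # 0.51 -- if AI cuts marginal cost to $0.50, price follows it down
```

The real question, as the (a)-(e) menu suggests, is whether bread markets stay competitive enough for this mechanism to operate, not whether productivity gains exist.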

This appears to be pro-gold/pro-bitcoin. But in a lot of those graphs you can just as easily pick '81 as the starting point; then sinking interest rates become the nice correlation. The Fed ordered that assets be more expensive for 40 years, and people wonder why labour isn't getting its share.

I'm aware that narrative is also going around; I just provided the Austrian side (which would actually agree with that assessment of the Fed's policy). But Marxists are also quick to point out that productivity gains go not to the workers but to capital. They point to different causes, but they too have a possible story for capitalism not lowering inequality, with or without using surplus value as a framework.

In any case I don't think the assumption that productivity gains necessarily make everyone wealthy is warranted.

This sort of outcome is what makes it very, very difficult for me to take the AI doomerism seriously. Yes, we may get Paperclip Maximiser AGI, but I think it's much more likely to come about by "humans in notional charge think it will make them trillions and so follow blindly its advice"

This is moving the goalposts to a distant planet.

I've heard before the criticism of "AI doom is not certain, therefore we shouldn't worry about it". I've never heard before "One type of AI doom is less likely than another type, therefore we can't take people who worry seriously".

You're missing my point. AI doomerism is about a certain undesirable outcome. Meanwhile, in reality, the rug is being pulled out from under them in a way that has nothing to do with "what if we develop an AGI that becomes self-aware and recursively boosts its intelligence to god-tier levels, without being aligned with liberal West coast values?" and everything to do with "if you threaten our return on investment, we must act to stop you".

The power has now switched from "the people who worry about alignment" to "the people who can guarantee a product to market". So if we are going to be doomed by AGI, the alignment people were working on the wrong problem all along - they needed to be converting their "profit first" colleagues and investors to "safety first", and it's pretty clear they didn't see it coming until too late. When they tried to avert it by firing Altman, they became the ones being shown the door.

The Board was in a lose-lose situation: either let Altman continue unchecked, or pick a fight they'd lose.

He's already consolidated personal power and has the loyalty of most of the employees, even if I'm sure many of them are pissed at their equity going up in smoke. The Board, able to exercise power only through hiring and firing CEOs, was powerless to do much about it; their only option was to C4 a problem where the occasional use of a scalpel might have helped.

The whole thing is both a farce and a tragedy, I have no confidence anymore that anybody will be able to halt the deployment of an AGI on any grounds, profit motive will continue going brrr. The only saving grace is that Altman is still x-risk pilled, so the potential increase in risk isn't unbounded.

I agree. The big takeaway from this weekend was that humanity as a species is in deep trouble because, when the chips are down, people will care more about money than principles.

I also think the criticism of the board is bizarre. They took their shot. They lost, ultimately because the employees didn't want to lose their huge salaries and bonuses. But what is power if you don't try to use it? This wasn't a strategic blunder; it was merely a board that never had any power in the first place.

Had they not taken action, they would have been swept aside anyway.

Perhaps their action will have some residual value in revealing previously hidden information. If anyone had any illusions about OpenAI before, they shouldn't now. We know what OpenAI is - a company that seeks power and money above all else. The news of research breakthroughs coming from OpenAI is deeply troubling.

The board badly fucked up their shot. I think they had a chance at persuading at least a fraction of OpenAI that sam did some things wrong, if indeed he had. A number of OpenAI employees care about AI safety, and some even expressed online that, initially after Altman's firing, they were open to the possibility that it was deserved. But when the board said nothing for days, even to the first and second interim CEOs, that left the employees with nothing to think about but concerns about their future income and whatever Sam told them.

They also could've spent the preceding months building up a case against sam, maybe picking a replacement CEO who the openai team would be amenable to, etc. But they didn't!

We're in agreement on this one. I do think the idealists (including the EA faction which went all-in on X-risk and unaligned AI) are sincere and genuinely want to help humanity, but they're never going to succeed in our current climate.

Between those who are more interested in playing with the technology, moving fast and breaking things, and chasing what they hope will be the technocratic future where they get their hearts' desires (hence dismissing any opposition or disagreement as Luddism, alongside the go-to 'but if we don't do it, China will do it first, and you don't want China to rule the future, do you?' appeal), and the good old profit motive ('by working to halt AI progress I am working against my own interests, since that will make the equity go up in smoke; hence despite ostensibly being pro-safety and pro-slowdown, I'm not really'), begging for a moratorium has a snowball's chance in Hell of actually being listened to, never mind taken seriously.