This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
NYT: Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding
The NYT scooped everybody. We finally know why Sam Altman was fired:
There are a few other minor issues mentioned in the article, but this sounds like the big one. Rationalist/EA types react very negatively to being told that they can't criticize "allies" in public, a position I am quite sympathetic to. Helen Toner works at an Open Philanthropy-funded think tank, so she's as blue-blooded an effective altruist as they get. My guess is that this was the moment she decided that Sam had to be eliminated before he took control of the board and jeopardized OpenAI's mission.
What gets me is how disingenuous this makes the original firing announcement: "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." It sounds like he was perfectly candid. They just didn't like what he was about.
In completely unrelated news, ChatGPT has been down for the last three hours.
The lack of candour may have referred to this or to things not reported on in the article.
Luckily, a brand new article just dropped with details about that:
Not entirely related, but here's a particularly eye-popping quote:
The link doesn't work for me - maybe this is explained elsewhere in the article, but going solely on the excerpt...
Horseshit. "Oh, he was lying, but we can't give you any examples because he's that good at lying" is the kind of excuse I would expect from a four-year-old, not a group of supposedly intelligent and qualified professionals. At this point I think that unless they actually give us the specifics, this all boils down to the GPT marketplace blowing up Poe and making a board member unreasonably angry.
Sam Altman is a real business shark whose literal job for the last twelve years has been dealing with boards of directors and VC investors. Running circles around a shape-rotator like Sutskever, or around an ivory-tower researcher like Toner, is child's play for him. And McCauley doesn't strike me as a serious contender against someone who successfully wrestled Reddit away from Condé Nast either.
And, tellingly, only D'Angelo managed to remain on the board of directors after Altman got his way. Scratch that, I have no idea how D'Angelo managed to survive the debacle.

It's not like you even have to be an experienced business shark to out-argue people who say "hey employees, you know what, I know that we can all become ridiculously rich in the next couple of years, but guys... guys... AI might destroy humanity at some point so let's not become ridiculously rich".
Trying to stop people from developing AI is like trying to stop people from developing nuclear weapons. Obviously, having nuclear weapons gives one enormous benefits. So the idea that someone could talk the whole world out of trying to get nukes by just using intellectual arguments is absolutely ludicrous.
Imagine starting a company called "OpenNuclear". "Let's develop nuclear technology in a safe way, for the benefit of all humanity". And then expecting that somehow the world's talented engineers will just go along with your goal of nuclear safety, instead of going to work building nuclear weapons for various organizations for huge salaries and/or because of powerful emotional reasons like "I don't want my country to get attacked". I can't think of any example in history of humanity as a whole refusing to develop a powerful technology. Even if somehow the world temporarily agreed to pause AI research, that agreement would probably be dropped like a hot potato the second some major war broke out and both sides realized that AI could help them.
But the world did that with Atoms for Peace:
https://en.wikipedia.org/wiki/Atoms_for_Peace
See also the International Atomic Energy Agency and the Treaty on the Non-Proliferation of Nuclear Weapons. Countries like Japan have a highly developed nuclear industry, but they don't have nuclear weapons.
Because when America occupied them after the war, it made damn sure Japan would never again get any notions about being a military power. It's why their military is known as the Japanese Self-Defense Forces: