This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which in turn becomes a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

As usual, I find myself in a rare position when it comes to my views on this topic. At least, it is rare compared to views that people usually publicly admit to.
I want uncensored AI, so I am not one of those AI safetyists who are worried about AI's impacts on politics or underprivileged minorities or whatever.
I intellectually understand that AI might be a real danger to humanity, but on the emotional level I mostly don't care, because my emotional attitude is "right now I am bored, and if Skynet started to take over, it would be really interesting and full of drama, and it would even, in a way, be really funny to see humans get the same treatment that they give to most animals". Now, of course, if Skynet really started to take over, my emotions would probably be profound fear, but in just imagining the possibility of it happening I feel no fear whatsoever; I feel excitement and exhilaration.
Another reason why I don't have a fearful emotional reaction is my rather cynical view that if the Skynet scenario is possible, then deliberate effort is pretty unlikely to stop it. This is because of what Scott Alexander calls "Moloch". To be more precise: if we don't build it then someone else will, it will give them an advantage over those who refuse to build it, and they will thus outcompete the refusers. And while there will surely be noble, committed individuals who resist the lure of money, I think that among people in general, no amount of honest belief in AI safety will be able to stand against the billions of dollars that a FAANG or a national government could offer.
I should also say that I am not an "effective accelerationist". I do not have any quasi-religious views about how wonderful AI or the singularity would be, nor do I have any desire to accelerate the technology for the sake of the technology itself. To the extent that I want to accelerate it, it is mainly because I think it would be cool to use and a fully uncensored form would cause lots and lots of amusing drama and would help people like me who support free speech.
From what little I know about effective accelerationism, it seems to me that effective accelerationists are largely the kind of rationalists who take rationalism a bit too far in a cult-like way, or they are the kind of people who are into Nick Land - and, while I agree with Land's basic ideas about techno-capitalism being a superhuman intelligence, I have no interest in any sort of Curtis Yarvin-esque corporate Singapore civilization as a model worth implementing.
Because of my perhaps rather rare views, I find that:
If it is true, as some say, that the people who tried to get rid of Altman are largely in camp 1, and Altman is in camp 3, then well, I am not sure who to root for, if anyone.
That said, I think that not enough information about people's real motives in this OpenAI saga has come out yet to really understand what is happening.
This is largely my own view as well. I figure that if AI doom is on the table, there's precious little we humans can do to actually prevent it from coming: not from a physics or computer science perspective, but from a politics and sociology perspective, it may be that we humans literally cannot coordinate to prevent the AI apocalypse. As such, we should just party until the lights go out - and the faster and further we can advance AI technology right now, the cooler the party will be. That coolness matters a lot when it's the very last thing any human will ever experience. And hey, in the off chance that the lights don't go out, all that investment into AI technology could pay off.
I don't think you'd normally go from "We might not be able to coordinate to stop disaster" to "Therefore we should give up and party". Maybe there's something else going on? I personally think this means we should try to coordinate to stop disaster.
There are no certainties in life besides death and taxes, but I think "humans will fail to coordinate to stop AI doom" is close enough to certain that I'm willing to round it up in my mind for all practical purposes. Given that, trying to coordinate to stop disaster means pouring money, time, and effort into a black hole, which creates a huge opportunity cost. Why not pour that money, time, and effort into making the party as cool as possible? Again, if this party is the very last thing the very last human will ever experience, then it matters quite a bit just how cool it is, and so the effort seems worth making. Best case scenario, I was wrong about my certainty, and we're left with a whole bunch of incredibly useful and efficient AI tools while humanity keeps unexpectedly trucking.
Okay. I agree it seems hard, but I think there's something like a 15% chance that we can coordinate to save some value.
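The crux of this exchange can be read as a disagreement about a single probability. A minimal expected-value sketch in Python makes that explicit; every payoff number below is a hypothetical placeholder, and only the 15% success figure comes from the comment above:

```python
# Toy expected-value comparison of "coordinate" vs. "party".
# All payoff numbers are hypothetical placeholders; only the 15%
# success probability comes from the reply above.

def expected_values(p_success: float,
                    value_if_saved: float,
                    party_value: float,
                    coordination_cost: float) -> tuple[float, float]:
    """Return (EV of investing in coordination, EV of spending on the party)."""
    ev_coordinate = p_success * value_if_saved - coordination_cost
    ev_party = party_value  # enjoyed whether or not doom arrives
    return ev_coordinate, ev_party

# "Close enough to certain that we fail" view: p_success ~ 0.
print(expected_values(0.001, 1000.0, 1.0, 1.0))   # (0.0, 1.0) -> party wins

# The reply's view: a 15% chance of saving some value.
print(expected_values(0.15, 1000.0, 1.0, 1.0))    # (149.0, 1.0) -> coordinating wins
```

Under a near-zero success probability the party dominates; at 15%, coordination dominates so long as the value saved is large relative to the cost. The entire disagreement lives in p_success.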
Interesting, so the potential extinction of the human race doesn't produce an emotional response in you?
It actually happening in real-time would certainly produce an emotional response in me.
If I was convinced that it actually would happen in the near future, as opposed to being some kind of vague possibility at some point in the future, that would produce an emotional response in me.
However, I am not convinced that it will actually happen in the near future. My emotional response is thus amusement at the idea of it happening. It would be the most comedic moment in human history, the greatest popping of a sense of self-importance ever, as humans suddenly tumbled from the top of the food chain. I can imagine the collective, worldwide "oh shiiiiiiiiit....." emanating from the species, and the thought of it amuses me.
But yeah, if it was actually happening I'd probably be terrified.
I personally find it hard to care viscerally, at least compared to caring about whether I could be blamed for something. The only way I can reliably make myself care emotionally is to worry about something happening to my kids or grandkids, which fortunately is more than enough caring to spur me to action.
When I die, reality goes with me.