This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
To bring up another post from last week, I'm going to go ahead and repost @justcool393's piece on the Sam Altman/OpenAI/Microsoft situation, since she posted it a few hours ago and right before the last thread went down.
Here's her writing:
Another day, another entrant into the OpenAI drama. Emmett Shear is the new interim CEO of OpenAI.
I don't know why it was surprising to people that Sam wouldn't come back. The company was meant to be subservient to the nonprofit's goals, and I'm not sure why it was apparently "shocking" that the attempted coup from Sam's side (you know, the whole effectively false reporting that Sam Altman was to become the new CEO) failed.
https://twitter.com/emilychangtv/status/1726468006786859101
What's kinda sad about all of this is how much people were yearning for Sam Altman to be the CEO, as if he isn't probably one of the worst possible candidates. Maybe this is just a bunch of technolibertarians on Twitter or HN who think that the ultimate goal of humanity is how many numbers on a screen you can earn, but the amount of unearned reverence towards a VC as the one to lead the company is amazing.
In any case, here's to hoping that Laundry Buddy won't win out in the rat race for AGI, lest we live in a world optimized for maximum laundry detergent. Maybe we'll avoid that future now with Sam's departure.
Anyway, I'll leave you with this to munch on, which I found in the HN thread.
https://twitter.com/eshear/status/1683208767054438400
I'm reposting here because I'm convinced, like many other residents, that the ongoing drama over who controls AI development has far-reaching implications, likely on the scale of major-power geopolitical events, if not ultimately even greater.
To add a bit to the discussion to justify reposting - I think many of these discussions around AI Safety versus Accelerationism are extremely murky because so many people in secular, rationalistic circles are extremely averse to claiming religious belief. It's clear to me that both AI Safety and Accelerationism have strong themes of classical religion, and seem to be two different sects of a religion battling it out over the ultimate ideology. Potentially similar to early Orthodox Christians versus Gnostics.
Alternatively, @2rafa has argued that many of the E/Acc (effective accelerationism) crowd comes from bored technocrats who just want to see something exciting happen. I tend to agree with that argument as well, given how devoid of purpose most of the technocratic social world is. Religion and religious-style movements tend to provide that purpose, but when you are explicitly secular I suppose you have to get your motivation elsewhere.
We've also got the neo-Luddites like @ArjinFerman, who just hate AI entirely and presumably want us to go back to the mid 90s with the fun decentralized internet. Not sure, I haven't actually discussed it with him. I can actually agree with some of the Luddism, but I'd argue we need to go back to 1920 or so and ban all sorts of propaganda, mass media, and advertising.
Anyway, clearly the technological battle for the future of our civilization continues to heat up. The Luddites seem out, but may have a surprising last-hour comeback. The woke/politically left-leaning folks seem to be strongly in charge, though the OpenAI scandal points to trouble in the Olympian heights of Silicon Valley AI decision makers.
Will the Grey Tribe use AGI to come back and finally recover the face and ground it has lost to the advancing SJW waves? Who knows. I'm just here for the tea.
As usual, I find myself in a rare position when it comes to my views on this topic. At least, it is rare compared to views that people usually publicly admit to.
I want uncensored AI, so I am not one of those AI safetyists who are worried about AI's impacts on politics or underprivileged minorities or whatever.
I intellectually understand that AI might be a real danger to humanity, but on an emotional level I mostly don't care. My emotional attitude is: right now I am bored, and if Skynet started to take over, it would be really interesting, full of drama, and even, in a way, really funny to see humans get the same treatment they give to most animals. Of course, if Skynet really did start to take over, my emotions would probably be profound fear; but in merely imagining the possibility of it happening I feel no fear whatsoever, only excitement and exhilaration.
Another reason why I don't have a fearful emotional reaction is my rather cynical view that, if the Skynet scenario is possible, deliberate effort is probably pretty unlikely to stop it. This is because of what Scott Alexander calls "Moloch". To be more precise: if we don't build it, then someone else will; it will give them an advantage over those who refuse to build it, and they will thus outcompete the refusers. And while there will surely be noble, committed individuals who resist the lure of money, I think that among people in general, probably no amount of honest belief in AI safety will be able to stand against the billions of dollars that a FAANG or a national government could offer.
I should also say that I am not an "effective accelerationist". I do not have any quasi-religious views about how wonderful AI or the singularity would be, nor do I have any desire to accelerate the technology for the sake of the technology itself. To the extent that I want to accelerate it, it is mainly because I think it would be cool to use and a fully uncensored form would cause lots and lots of amusing drama and would help people like me who support free speech.
From what little I know about effective accelerationism, it seems to me that effective accelerationists are largely either rationalists who take rationalism a bit too far in a cult-like way, or the kind of people who are into Nick Land. And while I agree with Land's basic idea of techno-capitalism as a superhuman intelligence, I have no interest in any sort of Curtis Yarvin-esque corporate-Singapore civilization as a model worth implementing.
Because of my perhaps rather rare views, I find that:
If it is true, as some say, that the people who tried to get rid of Altman are largely in camp 1, and Altman is in camp 3, then well, I am not sure who to root for, if anyone.
That said, I think that not enough information about people's real motives in this OpenAI saga has come out yet to really understand what is happening.
Interesting, so the potential extinction of the human race doesn't produce an emotional response in you?
It actually happening in real-time would certainly produce an emotional response in me.
If I was convinced that it actually would happen in the near future, as opposed to being some kind of vague possibility at some point in the future, that would produce an emotional response in me.
However, I am not convinced that it will actually happen in the near future. My emotional response is thus amusement at the idea of it happening. It would be the most comedic moment in human history, the greatest popping of a sense of self-importance ever, as humans suddenly tumbled from the top of the food chain. I can imagine the collective, world-wide "oh shiiiiiiiiit....." emanating from the species, and the thought of it amuses me.
But yeah, if it was actually happening I'd probably be terrified.