This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
So I just ate an automated 3-day Reddit ban for saying we should bomb the Tigrayan militants responsible for a genocidal strategy of raping and genitally mutilating women. I can't really complain about that: I was knowingly in violation of Reddit's "no advocating violence" policy. I have been before, and I will be again, probably until I get permabanned, because sometimes violence is the solution. Thomas Aquinas will back me up there.
But what's interesting to me is the "automated" part. Now, I've faced my fair share of human disciplinary action before. Sometimes it's fair, sometimes it's not. But either way, the humans involved are trying to advance some particular ideological goal. Maybe they blew up because I contradicted their policies. Maybe they developed a nearly autoimmune response to any kind of NSFW post because of prior calamities. (Looking at you, SpaceBattles mods.) Maybe they genuinely wanted to keep the local standard of discussion high. But Reddit's automated system is clearly not designed for any of that. Rather, its most fundamental goal seems to be the impartial and universal enforcement of Reddit's site-wide rules, to the best of its capability.
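To make "rule-following, not just" concrete, here's a toy sketch of what impartial, universal enforcement looks like as code. Everything in it is hypothetical: the flagged phrases, the classifier stand-in, and the escalating ban ladder are my guesses at the general shape of such a system, not Reddit's actual implementation.

```python
# Purely hypothetical sketch of an automated rule-enforcement pipeline.
# Nothing here reflects Reddit's actual implementation; it just shows how
# a system can be perfectly "impartial" without any notion of justice.

from dataclasses import dataclass

# Escalating sanctions applied in order, to anyone, for any violation:
# the system enforces rules; it does not weigh context or intent.
BAN_LADDER = ["warning", "3-day ban", "7-day ban", "permanent ban"]

@dataclass
class UserRecord:
    username: str
    strikes: int = 0

def violates_sitewide_rules(comment: str) -> bool:
    """Stand-in for a trained classifier. A real system would score the
    text with a model; keyword matching is just for illustration."""
    flagged_phrases = ["we should bomb", "kill them all"]  # hypothetical rule list
    return any(phrase in comment.lower() for phrase in flagged_phrases)

def moderate(user: UserRecord, comment: str) -> str | None:
    """On any violation, apply the next rung of the ban ladder.
    Note what's absent: appeals, intent, ideology -- just rules."""
    if not violates_sitewide_rules(comment):
        return None
    sanction = BAN_LADDER[min(user.strikes, len(BAN_LADDER) - 1)]
    user.strikes += 1
    return sanction

user = UserRecord("example_poster")
print(moderate(user, "We should bomb the militants responsible."))  # warning
print(moderate(user, "And we should bomb them again."))             # 3-day ban
```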
I agree with Yudkowsky on the point that an "aligned" AI should do what we tell it to do, not what is in some arbitrary sense "right." So I'm also not going to complain about how "cold and unfeeling AI can't understand justice." That would be missing the forest for the trees. It's not that AI aren't capable of justice; it's that the Reddit admins didn't want a just AI. They wanted, and made, a rule-following AI. And since humans created the rules, from their impartial enforcement we can work out what the humans' underlying motivations actually are: namely, ensuring that Reddit discussions are as anodyne and helpful as possible.
Well, really it's "make as much money as possible." But while AIs are increasingly good at tactics-- at short tasks-- they're still sorely lacking at strategy. So Reddit admins had to come up with the strategy of making discussions anodyne, which AIs could then implement tactically.
The obvious question is: "why?" To which the obvious response is, "advertisers." And that would be a pretty good guess, historically. Many of Reddit's (and Tumblr's, and Facebook's, and pre-Musk Twitter's) policy changes have come as a result of advertiser pressure. But for once, I think it's wrong. Reddit drama is at a low enough ebb that avoiding controversy doesn't seem like it should be much of a factor, and this comes at a time when sites like X, Bluesky, and TikTok are trying to energize audiences by tacitly encouraging more controversy and fighting.
Which brings me to my hypothesis: that Reddit is trying to enhance its appeal as AI training data.
Everyone knows that Google (and Bing, and DuckDuckGo, and Yahoo) have turned to shit. But Reddit has retained a reputation for being a place to find a wide variety of useful, helpful, text-based content. That makes it a very appealing corpus on which to train AI-- and Reddit realized that ages ago, which led to things like the paid API. This automated moderation style isn't necessarily the best for user retention, or for getting money through advertising, but it serves to pre-clean the data companies can feed to AI. It's sort of an inverse RLHF. RLHF is humans trying to change the response strategies LLMs adopt by making tactical choices to encourage specific behaviors. Reddit moderation, meanwhile, is encouraging humans to come up with strategic adaptations to the tactical enforcement of inoffensive, helpful content. And remember what I said about humans still being better at strategy? This should pay massive dividends in how useful Reddit is as training data.
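A loose sketch of the symmetry being claimed, in toy Python. Every class and number here is a made-up stand-in rather than any real pipeline; the point is only the direction of the arrows: RLHF nudges the model to fit human judgments, while moderation-as-data-cleaning nudges the surviving human text to fit the machine's rules.

```python
# Illustrative toy only: a stylized contrast between RLHF and the
# "inverse RLHF" described above. All names are hypothetical stand-ins.
import random

class ToyModel:
    """Stand-in LLM: chooses between a blunt and an anodyne phrasing,
    with a preference weight that training nudges around."""
    def __init__(self):
        self.anodyne_weight = 0.5
    def generate(self) -> str:
        return "anodyne reply" if random.random() < self.anodyne_weight else "blunt reply"
    def update(self, response: str, reward: float):
        # One tactical human judgment shifts the model's overall strategy.
        step = 0.1 * reward
        if "anodyne" in response:
            self.anodyne_weight = min(1.0, self.anodyne_weight + step)
        else:
            self.anodyne_weight = max(0.0, self.anodyne_weight - step)

def rlhf_step(model: ToyModel) -> None:
    """RLHF: a human makes the tactical call (rating one output),
    and the model's response strategy drifts to match."""
    response = model.generate()
    reward = 1.0 if "anodyne" in response else -1.0  # human rater stand-in
    model.update(response, reward)

def inverse_rlhf_step(corpus: list[str], comment: str) -> None:
    """The claimed inverse: the machine makes the tactical call
    (remove or keep), and the surviving human text becomes
    pre-cleaned training data."""
    if "bomb" not in comment:      # tactical machine judgment
        corpus.append(comment)     # the corpus, not the model, adapts

model = ToyModel()
for _ in range(100):
    rlhf_step(model)
print(round(model.anodyne_weight, 2))  # drifts toward 1.0: the model adapted

corpus: list[str] = []
for c in ["we should bomb them", "here's a helpful guide"]:
    inverse_rlhf_step(corpus, c)
print(corpus)  # only the anodyne comment survives: the data adapted
```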
Coda:
As my bans escalate, I'm probably going to be pushed off Reddit. That's probably for the best; my addiction to that site has wasted way too much of my life. But given what I enjoy about Reddit specifically-- the discussions on wide-ranging topics-- I don't see replacing it with X, or TikTok, or even (exclusively) the Motte. Instead, I'm probably going to have to give in and start joining Discord voice chats. And that makes me a little sad. Discord is fine, but I can't help but notice that I'm going down the same path that so many repressed third-worlders do, resorting to discussion in unsearchable, ungovernable silos. For all the sins of social media, it really does-- or at least did-- serve as a modern public square. And I'll miss the idea (if not necessarily the reality) that the debates I participated in could be found, and heard, by a truly public audience.
Why did it take 20 years for Reddit to turn a profit? Look at another heavily moderated forum from the past: Twitter. How often did it turn a profit? Why did these companies keep getting funding at ridiculous valuations? Maybe because they are a way of doing sentiment engineering at scale, through various behavior-modification tricks with likes, upvotes, and retweets. Maybe that was the purpose: not to turn a profit, but to modify behavior, to do social engineering. Maybe that is more valuable to the owners?
Controlling the minds of normies is extremely valuable. Elon Musk didn't buy Twitter for the money. He bought it to use as a mouthpiece and, more importantly, to keep it from being used against him.
This, a thousand times, is why I despise social media. Nobody is getting real conversation on social media, because it's curated to funnel your mind down a path leading to the pre-approved opinion. I mean, propaganda is so pervasive in the modern West that I think we're as bad as or worse than the worst totalitarian regimes of the last century in terms of propaganda and psychological manipulation. Stalin put out propaganda, sure, but it wasn't nearly as pervasive as what we have. He had radio, newspapers, and posters. He couldn't steer private conversations; he couldn't delete crime-think from social consciousness. He could chill things by arresting obvious and loud dissenters, but that is much more limited than what social media does via AI and deletion. Our propaganda machine hides itself, and people are led to believe that they are having neutral conversations.
I think this is at least partly overselling our AI panopticon overlords. This might be true in online spaces, but those aren't everything, and even then offshoots of sites challenging moderation policies are common (Bluesky, Truth Social). And they have almost no power over IRL discussions and actions; the attempts made a decade ago seem to have overreached and receded. To hear Reddit tell it, there basically aren't any Republicans anywhere in the US, and nobody shops at Hobby Lobby. And there are people who cloister themselves to the extent that they believe this, but as it turns out the levers of political power aren't particularly beholden to Reddit ~~dog walkers~~ mods.

It's not just social media, but regular media, education, and control mechanisms like the ability to be fired for saying something online, or convincing others to shun friends and even family who say things the regime doesn't like. Americans are saturated in propaganda, and unless you're paying attention you probably don't even notice it.