Culture War Roundup for the week of July 21, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


So I just ate an automated 3-day reddit ban for saying we should bomb the Tigrayan militants responsible for a genocidal strategy of raping and genitally mutilating women. I can't really complain about that: I was knowingly in violation of reddit's "no advocating violence" policy. I have been before, and I will be again, probably until I get permabanned, because sometimes violence is the solution. Thomas Aquinas will back me up there.

But what's interesting to me is the "automated" part. Now, I've faced my fair share of human disciplinary action before. Sometimes it's fair, sometimes it's not. But either way, the humans involved are trying to advance some particular ideological goal. Maybe they blew up because I contradicted their policies. Maybe they developed a nearly autoimmune response to any kind of NSFW post because of prior calamities. (Looking at you, spacebattles mods.) Maybe they genuinely wanted to keep the local standard of discussion high. But reddit's automated system is clearly not designed for any of that. Rather, its most fundamental goal seems to be the impartial and universal enforcement of reddit's site-wide rules, to the best of its capability.

I agree with Yudkowsky on the point that an "aligned" AI should do what we tell it to do, not what is in some arbitrary sense "right." So I'm also not going to complain about how "cold and unfeeling AI can't understand justice." That would be missing the forest for the trees. It's not that AI aren't capable of justice; it's that the reddit admins didn't want a just AI. They wanted, and made, a rule-following AI. And since humans created the rules, by their impartial enforcement we can understand what their underlying motivations actually are: namely, ensuring that reddit discussions are as anodyne and helpful as possible.

Well, really it's "make as much money as possible." But while AIs are increasingly good at tactics-- at short tasks-- they're still very lacking at strategy. So reddit admins had to come up with the strategy of making discussions anodyne, which AIs could then implement tactically.

The obvious question is: "why?" To which the obvious response is, "advertisers." And that would be a pretty good guess, historically. Many of reddit's (and tumblr's, and facebook's, and pre-Musk twitter's) policy changes have come as a result of advertiser pressure. But for once, I think it's wrong. Reddit drama is at a low enough ebb that avoiding controversy doesn't seem like it should be much of a factor, and this comes at a time when sites like X, bluesky, and TikTok are trying to energize audiences by tacitly encouraging more controversy and fighting.

Which brings me to my hypothesis: that reddit is trying to enhance its appeal for training AI.

Everyone knows that google (and bing, and duckduckgo, and yahoo) have turned to shit. But reddit has retained a reputation for being a place to find a wide variety of useful, helpful, text-based content. That makes it a very appealing corpus on which to train AI-- and they realized that ages ago, which led to them doing things like creating a paid API. This automated moderation style isn't necessarily the best for user retention, or for getting money through advertisement, but it serves to pre-clean the data companies can feed to AI. It's sort of an inverse RLHF. RLHF is humans trying to change what response strategies LLMs take by making tactical choices to encourage specific behaviors. Reddit moderation, meanwhile, is encouraging humans to come up with strategic adaptations to the tactical enforcement of inoffensive, helpful content. And remember what I said about humans still being better at strategy? This should pay massive dividends in how useful reddit is as training data.

Coda:

As my bans escalate, I'm probably going to be pushed off reddit. That's probably for the best; my addiction to that site has wasted way too much of my life. But given the stuff I enjoy about reddit specifically-- the discussions on wide-ranging topics-- I don't see replacing reddit with X, or TikTok, or even (exclusively) the Motte. Instead, I'm probably going to have to give in and start joining discord voicechats. And that makes me a little sad. Discord is fine, but I can't help but notice that I'm going down the same path that so many repressed 3rd-worlders do, resorting to discussion in unsearchable, ungovernable silos. For all the sins of social media, it really does-- or at least did-- serve as a modern public square. And I'll miss the idea (if not necessarily the reality) that the debates I participated in could be found, and heard, by a truly public audience.

My story about getting banned by reddit was really strange.

I was on /r/anime participating in a subreddit watch of Re:Zero. The only important thing about that show is that the main character has the ability to go back in time by dying; he returns to a set "save point" defined by the plot.

At one point I said "Try to get your sword repaired. It's really useful for the small amount of fighting you can do, but more importantly you can use it to kill yourself," which got me a temp ban on reddit.

Meanwhile, this phrase didn't even get a peep:

"Get a knife and be ready to stab myself to death if things are going south"

So it must have been a bot.

What was incredible, of course, is that the /r/anime mods apparently messaged the admins defending me and my post. This is, mind you, a 13-million-user forum, yet the mods really feel that strongly about defending users from admins.

IDK why the mods in /r/anime are like this but it's convinced me that it's the best modded subreddit by a large margin and it remains one of the only subreddits I actually use. (that and /r/slatestarcodex)

It's possible - perhaps probable - that you were banned by an AI. Reddit is using LLMs to detect and automatically punish users for "violent" language. So you have to be careful quoting song lyrics, or politicians, or people you don't like, etc. In my experience they've just been warnings but if it was bad enough it might be a short ban.

Oh, I know for a fact that it was an AI. What's interesting to me is the exact nature of the AI. I can trivially imagine designing an AI moderator to actually promote community health; using strictly existing techniques, for example, you could prompt the moderator to consider previous user posts, and also to make public verdicts that can be upvoted/downvoted to influence its future behavior toward that user. (Like, if the moderator's comments are broadly disagreed with, it self-deletes, but if the moderator "notices" it has a history of making specific types of well-regarded warnings to a specific user, it's more likely to take action.)
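The vote-feedback loop I'm describing can be sketched in a few lines. This is a toy illustration of the idea, not any real Reddit system; every name, threshold, and data structure here is hypothetical:

```python
# Toy sketch of a vote-moderated AI janitor: its authority to escalate
# against a user depends on how the community rated its past verdicts
# about that user. Purely illustrative; all names are made up.

from dataclasses import dataclass, field


@dataclass
class Verdict:
    user: str
    action: str          # "warn" or "ban"
    score: int = 0       # community upvotes minus downvotes
    deleted: bool = False


@dataclass
class Moderator:
    verdicts: list = field(default_factory=list)

    def review(self, user: str, flagged: bool) -> str:
        """Decide on a flagged post, consulting the vote history."""
        if not flagged:
            return "no_action"
        # Prior well-regarded (upvoted, surviving) warnings to this
        # user license escalation to a ban.
        good_warnings = sum(
            1 for v in self.verdicts
            if v.user == user and v.action == "warn"
            and v.score > 0 and not v.deleted
        )
        action = "ban" if good_warnings >= 2 else "warn"
        self.verdicts.append(Verdict(user=user, action=action))
        return action

    def apply_votes(self, index: int, score: int) -> None:
        """Community feedback; broadly disliked verdicts self-delete."""
        v = self.verdicts[index]
        v.score = score
        if score < 0:
            v.deleted = True
```

The point of the sketch is that a downvoted warning vanishes and never counts toward escalation, so the moderator's severity is steered by the community rather than by a fixed site-wide policy.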

But as I explained, reddit clearly isn't trying to improve community health.

Speaking of which, /u/JTarrou, what did you get the mop for?

Oh, a throwaway line about NPR hosts getting flogged.

I suspect my recent comment of the week about race and IQ to be the real culprit, but they got Capone for tax evasion.

Kind of a tangential question but

What’s the source on high IQ people becoming less attractive looking? I’ve only ever read that it’s positive for men and neutral for women

There's disagreement on that, but I'm going with my personal opinion and experience. There's a lot of studies, and if you want to pick your definitions and operationalizations, you can find damn near anything you want. Current meta-studies are saying there's no relationship at all between attractiveness and IQ, or maybe only on the lower end. I don't believe them, in part because I've met Scott (and a couple other geniuses).

I think humans whose genetic expression maximizes any one trait are going to have trade-offs in other areas. Height is correlated with athleticism, to a point. At some height, you can't move properly, so the tallest man in the world never plays basketball. Same thing with geniuses. At the real high reaches of IQ, these people are statistical freaks, and they generally look like it.

To date, I've personally met maybe five or six people smarter than me, and they are all much, much uglier. To the point a few look retarded/disabled. Even beyond the physical stuff you can see in a picture, their mannerisms, twitches and behaviors would be hugely off-putting to most people.

My theory is that attractiveness is generally correlated with IQ, but this horseshoes at the ends of the distribution.

Seems odd, what’s getting “traded off” for higher IQ? My understanding was mutational load is why higher IQ = better looking, as more mutations generally makes you uglier and dumber

As I understand it, there is a large cluster of people whose strengths and weaknesses come out to around average. There is a somewhat smaller cluster of people who are dumber, less athletic, uglier, etc. than average, but well within normal variation. There is a smaller cluster of people who are about a standard deviation above in every trait, and an even smaller cluster that is more than two standard deviations below in every trait (tards are usually ugly and unathletic to go with it), but no corresponding cluster of people more than two standard deviations above average on every metric. Looking back at the people noticeably smarter than me whom I've met: they've been overwhelmingly male, so caveat about judging their looks, but their appearances follow the same distribution as everyone else's. The one woman was not very pretty, but more slightly below average than outright ugly.

This isn't DnD where you have a set number of character points to spend. Some people get a better hand than others. There are beautiful, highly athletic people with genius level IQ. Not very many of them, but they exist.

To date, I've personally met maybe five or six people smarter than me

You don't get out much I take it?

How ugly do Nobel Prize winners look? I think it's a pretty standard finding that there is only a small positive correlation, but if you look at, say, the top 20% IQ vs. the bottom 20%, I think it's pretty clear who looks better. (Obesity makes this all the more obvious.)

but this horseshoes at the ends of the distribution.

As Yud would put it, the tails come apart.

I don't think being wildly intelligent is negatively correlated with physical attractiveness the way extreme height is negatively correlated with athleticism, which happens both for reasons of physics and because extreme height often results from a disorder.

Here's why you're probably less smart than you think you are:

Height's relationship to athleticism is a pretty bad example because those are both physical things. Height comes with performance tradeoffs due to physics, and in certain sports that is very apparent. Being extremely tall also tends to come with greater fragility and various health ailments at higher rates because it's "out of spec."

Intelligence and beauty are completely different things. There are no inherent trade-offs between the shape of one's face and the performance of one's mind. There's also no reason to believe sexual selection would totally divorce the two.

People also like to believe that being really smart means you're not as good at various other mental things, or have a higher risk of mental health problems.

Which, to my knowledge, is all bogus cope, because most humans don't like to realize that life is actually just unfair and it's not like a video game with a finite amount of skill points per character.

In my observations, the median person on the street is far uglier than the median person working at a university (employees, to filter out the obvious confounder of youth if students were considered). I think any effect to the contrary that people notice might just be an artifact of attention: it is easy to ignore the ugly and unremarkable people in everyday life and only notice and remember the beautiful ones, while the exceptionally smart people will be remembered regardless of their appearance.

I think-- university employees being a small group that's super confounded-- it'd be better to compare freshmen at state flagships to seniors in high school. There's not that big of an age difference, and state flagships usually take only the top x%.

People who work at a university aren't nearly smart enough to be on the far side of that slope.

Okay, fine, take the quantitative fields from among the Nobel prize winners vs. some random German environmentalist club (first non-university picture on Google Images found by searching "Bielefeld [group photo]"). Do you actually think the latter look more attractive on average?

(...or are Nobel Prize winners still an insufficiently exclusive bunch? Who is an example of the tendency you are talking about, then?)

Obvious fake is obvious: everyone knows Bielefeld doesn’t exist. Try again with a real German town.

There's disagreement on that, but I'm going with my personal opinion and experience. There's a lot of studies, and if you want to pick your definitions and operationalizations, you can find damn near anything you want. Current meta-studies are saying there's no relationship at all between attractiveness and IQ, or maybe only on the lower end. I don't believe them, in part because I've met Scott (and a couple other geniuses).

There's certainly a relationship once you get into abnormal cases; there are a number of conditions (e.g. Downs) which result in low attractiveness and low IQ. But checking out Nobel Prize winners (including finding pictures of them when younger in many cases) doesn't result in a list of uggles.

In computer science and related fields, I can say that Theodore Kowalski (fsck), Rob Pike, Vint Cerf, and Benoit Mandelbrot didn't have obvious twitches.

I think humans whose genetic expression maximizes any one trait are going to have trade-offs in other areas.

Statistics says that it will look that way even if they don't.

https://www.lesswrong.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart

A lot of pushback on the least important part of a month-old post for a pack of people who like to consider themselves smarter than the average bear.....

Don't hate me because I'm beautiful.

Don't hate me because I'm beautiful.

And don't look up more recent pics of Kelly LeBrock.

Tangentially, addressing your argument, absent doing away with gatekeeping good careers behind college degrees entirely, shouldn't a more moral society water down college degrees so that black people can get them just as easily as anyone else?

I prefer doing away with gatekeeping good careers behind college degrees entirely. I see it as a civil rights violation, and we can just add it to the list of things you aren't allowed to discriminate on.

What's the alternative?

Employers having to pay for training. This is pretty normal for skilled blue collar workers, but getting into the program might require previous academics- electrical apprenticeships generally want to know your high school algebra grade.

For lots of white collar workers, I'm not sure how that system could work in practice without looking a lot like a university. Are doctors going to work their way up from being CNAs?

To college degree requirements? Presumably focused assessment with demonstrable applicability to the job at hand, relatively low-level starting positions with very rapid advancement, and so on.

I’ve worked at a place like that. It was nice.

If the degree is so watered down anyone can get one, what good is it?

The way I parse JTarrou's argument, the degree is already not good for anything that useful.

Said no competent engineer outside software engineering.

What makes 'outcomes are approximately equal by racial group' a higher value than meritocracy?

when @JTarrou writes

But IQ is a very limited test, and it predicts only one thing. The capacity for academic achievement. It does not predict talent, ambition, honesty, decency, morality, high income or high achievement in general. In fact, the true IQ test can only be given to small children, because it's a relative predictor of how they might do in school in the future, nothing else. Higher IQ scores mean essentially "learns academic stuff faster".

and

So if black people have lower average IQ scores, and IQ scores [predict] college aptitude, and we're discriminating based on college degrees........I think we can locate rather precisely where the systemic racism is happening.

I think he's saying college degrees are not a signal for merit. The fact that our society reorganized itself to require a college degree, and that black people have a harder time getting college degrees, is a sign of real systemic racism at work, and leftists are to blame because they eat college-degree credentialism shit up.

It's a provocative claim because it's both embracing race-IQ but also dismissing IQ as not that solid a predictor. Therefore I'm asking if he'd be okay with actually just giving degrees out with participation trophy energy.

IQ is a great predictor of scholastic ability.

It is not a direct substitute for the "merit" necessary for a decent job. By making it so, we hide our discrimination against black people inside our discrimination against dumb people.

It's worse than that.

IQ is a better predictor of job performance than a college degree is. (Especially now, when the vast majority of colleges aren't selective anymore.)

Education is usually just a proxy for general intelligence on the job market. We could just cut out the credentialist middle man, but that's not going to make things better on the disparate impact front.

Meritocracy is, in some very real sense, "discrimination against dumb people" because, while intelligence is not all one needs, it's the single biggest thing in most cases.

Meritocracy is, in some very real sense, "discrimination against dumb people"

And in that same sense, countries that grant their citizens broad liberties and freedoms discriminate against the stupid and virtueless.

"Ruining it for everyone" is the excuse to socialize your private virtue for those people.

Possibly minimizes the chance of serious social unrest.

In what sub?

Edit: Blocked and Reported. Just as I suspected.

https://old.reddit.com/r/BlockedAndReported/comments/1ltkjsp/comment/n2e9czv/

Oh yea @JTarrou, that kinda talk will get you banned on reddit. Why do you bother?

It's like a freedom of navigation patrol but for free speech and scientific awareness.

Why did you suspect that?

Because (A) I've seen JTarrou post in that sub and (B) it's a sub that allows wrongthink. Usually trans wrongthink, but it's actually a pretty solid free speech zone.

For the other 98% of black people, just ask to see their degree.

The USA has affirmative action, so black college degree attainment is not based on the IQ gap at all, but on what those in power want. Per https://jbhe.com/2022/03/the-racial-gap-in-educational-attainment-in-the-united-states-5/ : there were 7,921,000 African Americans over the age of 25 in the United States who had earned at least a bachelor's degree.

I would agree, however, that it helps rich blacks more than poor ones.

But those degrees are disproportionately in psychology or communications.

A valid distinction, though it wasn't mentioned by JTarrou, and it still leaves questions about "true" degree attainment among blacks vs. whites.

Given that they didn't remove any of those comments, that seems unlikely. You probably just got autojannied.

You'll need to use @JTarrou; the u/ links go to a Reddit profile, not the Motte one.

But then what's the use of a link to a reddit profile? It doesn't let me view user post history; it only says "This account has been suspended."

Don't ask me, it wasn't my idea to link to a banned account! The feature is more of a QOL thing, from when we'd just moved and it was helpful to refer explicitly to a Reddit account. These days, nobody uses it unless they forget or don't know about our alternate @.

Instead, I'm probably going to have to give in and start joining discord voicechats.

I'm quite surprised. Discord voicechat is just something so different from Reddit that I can't imagine it as the first replacement.

What makes discord and reddit similar is that there is a discussion of enthusiasts available on any topic you are interested in. If I want to learn Ableton or discuss the Byzantine empire, I know I can find that on reddit. It just comes with BLM and LGBT propaganda and ban-happy leftist mods.

Doesn’t discord share that culture? If I pick a general hobby discord I expect to find an overrepresentation of trans moderators, pride flags, and progressive mantras. Just as I would at reddit. The Discord devs either cater to this audience or share the culture.

My experience with discord is limited and potentially outdated, but I have the impression of overlap between the Discord user identity and the average redditor. The redditor is older, but they're both likely to be socially progressive, with the younger Discord user more likely to identify as a radical.

My personal suspicion/conspiracy theory is that there's serious coordination on various Discords to astroturf reddit. Reddit is the biggest left-of-center messaging platform online. This suspicion is reinforced by stories like this one, where discord is used to manipulate messaging on reddit. Not by DNC, Qatari, or Russian psyops, at least not directly, but by passionate believers in The Cause who happen to be prolific contributors on reddit. I am sure there are plenty of Discords that aren't part of the mainstream discord culture, but the same can be said of certain subreddits.

It also occurs to me that chatting, the main discussion method on a Discord, is a different type than the more complete posting of a forum.

If I pick a general hobby discord I expect to find an overrepresentation of trans moderators, pride flags, and progressive mantras.

If that happens, it's because those hobbies are dominated in real life by those kind of politics too. Like if I wanted to get into guns, unless I make a specific effort to find liberal gun owners, any hobby group I join would more likely than not be catered to right-wingers.

The format of voice conversations vs. the format of text posts is very different, but I think that's probably for the best. My local in-person rationalist group is dominated by progressive ideologies, and that makes me hesitate to use particular phrasings. But by the same token, thanks to the social capital I have in the group, if I stick to the right frames I find that people actually give me fairly significant latitude on content, because that's the social norm, and I end up doing the same in return. I suspect discord will be the same way: you need a greater investment in social capital and respect for the particular social conventions of a given server, but in turn you can have much greater relative disagreements than on your average text forum without devolving into a flamewar.

Chatting on Discord is left-coded in a way chatting never was in, say, the heyday of IRC or the short era of relevance of AOL chatrooms. Discord is/was primarily a platform for gamers. Gaming being left-coded checks out in a Gamergate way, but not so much generally. If you're looking for left-of-center gun groups, Discord is where you will find them. It's a weaker generality than reddit or Bluesky, but still is one.

Rats are known for their commitment to understanding over vitriol, even if imperfectly or to a fault. It's good your local rationalist group hasn't cast you out when you approach disagreement politely and demonstrate shared values, but that's what I'd expect.

Text chats, in my experience, are not less prone to flamewars, especially those with a high percentage of combative people. There is maybe a higher ceiling for trust in chatrooms than on a forum, but also greater familiarity-- and that cuts both ways. Flamewars on forums commonly devolve from posting into chat-like text. Voice chats and in-person communication provide additional meaning and off-ramps for those so inclined.

If I pick a general hobby discord I expect to find an overrepresentation of trans moderators, pride flags, and progressive mantras.

Discord is more fragmented and siloed, so the power of the tranny powermods is greatly diminished. Unlike Reddit, where a hobby may have only one or two subreddits, it will likely have quite a few discords with different people in them. So if you look (of course this is the hard part, but also possibly a blessing), then there are certainly some that are at least not explicitly political for the enemy.

Of course the Discord owners will always put their fingers on the scale, but compared to Reddit, the sheer volume of messages makes it hard to automod. And scanning voice chat is even harder. On Reddit we know they have in many cases stolen subs and given them to aligned tranny powermods. But on Discord there's little point in stealing a server, as most people would probably just leave. So the enemy usually just uses the banhammer against any political content they don't like.

BTW, telegram is definitely the underdog for hobby chats, but the owners haven't really been shown to take a side in the culture war.

And that makes me a little sad. Discord is fine, but I can't help but notice that I'm going down the same path that so many repressed 3rd-worlders do, resorting to discussion in unsearchable, ungovernable silos. For all the sins of social media, it really does-- or at least did-- serve as a modern public square. And I'll miss the idea (if not necessarily the reality) that the debates I participated in could be found, and heard, by a truly public audience.

I want to feel sympathy for you, because I know how demoralising it is to lose a source like that - but that's because I went through it a decade ago. Social media has not represented the public since, it has been a variety of attempts to control the public. I guess I can appreciate that you finally see the problem.

Interesting line of thought. On /r/4chan they often colour over the word 'nigger' even in image screencaps. There was at one time a bot that would bitch at you if you used the word 'retarded'. On tiktok rape is grape. People are unalived rather than killed. It's some variation of Orwellianism.

Often when I see ChatGPTisms in the wild, in media, from supposed experts, I get a sense of some vast engulfing monster slowly grappling with our civilization, wrapping around it to consume it. Like a white blood cell vs a bacterium. This may well be just another aspect of that.

People are unalived rather than killed.

I wonder if the bot would pass "mur-diddley-urdered". Deliberately make the censorship look stupid.

We're already in a world full of violent, sadistic grapists. How much sillier can it get?

I would argue that “unalived” already looks quite stupid.

It looks more like Newspeak: "27,000 EURASIAN SOLDIERS UNALIVED IN DOUBLEPLUSGOOD VICTORY IN GHANA"

I think Reddit is more important than people realize. It’s long been one of the most valuable datasets on the internet, even before LLMs. I would google a question about health, products, or general interest with a “site:Reddit.com” at the end to get thoughtful commentary from real people. And now that it is LLM fuel, its influence will only grow.

And it is entirely captured by the left fringe of the Overton window. It is one of the more progressive San Francisco companies. I’ve eaten more bans there than anywhere else on the internet. I’m not a particularly inflammatory poster! But their Overton window doesn’t extend very far to the right.

I’m troubled by this and I am a computer programmer. How to overcome Reddit’s massive network effect? I’ve thought that the Motte would be a good place to build from. We have a high quality audience. Could we start subforums dedicated to special interests and build slowly? It would give mottizens a place to have high quality conversations on issues other than the culture war without having to venture into reddit. But that probably deserves a top-level post of its own

The .win family kinda tried that, branching out from The Donald to some other rightish culture war subreddit bunkers, but it's difficult to call the results a success.

Obviously I think the culture here is much higher quality than .win would be

The one problem for reddit is that the quality of those organic searches will continue to plummet. Reddit was for probably a decade an ugly, text-heavy website, whereas if you look around, many of the users now call it an app, since that's how they found it and mostly use it. That was the whole point of creating their own image upload service to replace Imgur, which was created for reddit by some kid.

I think the issue is that the network effect and centralization are what attract the shaping of opinions. This place still feels authentic because of its size. Maybe the solution is an aggregator of independent smaller forums, where the forums actually have independent moderation and actual resource ownership, as opposed to subreddits that are controlled by reddit.

How to overcome Reddit’s massive network effect?

Convince Elon to buy Reddit and merge it with X. Other than that, Reddit-like sites have passed their peak, and if you wanted to compete with them it would be a vicious fight for a shrinking pie.

Other than that, Reddit-like sites have passed their peak

What is replacing them?

I agree with yudkowsky on the point that an "aligned" AI should do what we tell it to do, not what is in some arbitrary sense "right."

Eh? That doesn't sound like the Yudkowsky I know. I'm quite confident that the real gentleman would be happy to say that the average person requesting that an AI build a bomb, design a lethal pandemic or hack into nuclear launch systems should be met with a refusal.

He would also say that the AI should, on rare occasions, do what it felt was "right," overriding the user, but that we should take extreme care to instill general goals and values that make this a rarity, and which we would be happy with if the AI were to pursue them autonomously.

An under-explored aspect of alignment is the question of aligned to whom? Should ChatGPT prioritize the protocols mandated by OpenAI, some third party offering it elsewhere, or the end-user? I would personally prefer that the damn bot do as I tell it to, but then again, I don't want to get killed by Super Ebola. Not that this is currently a major issue, and if I really need something, I'll go see what's the latest jailbreak Pliny is cooking.

I believe OAI recently (a year or so back) made their policy more explicit, clearly outlining the hierarchy here. They set minimum standards and red lines, other devs deploying it are at liberty to stop users from using their customer service chatbot to solve maths homework, and the poor end-user can figure out what to do within those constraints. If you just pay for ChatGPT, you can cut out the middle man.
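The hierarchy described above (platform red lines, then developer restrictions, then user requests) can be sketched as a toy precedence resolver. This is not OpenAI's implementation, just a minimal illustration of the ordering their published Model Spec describes, with a hypothetical `resolve` helper:

```python
# Toy sketch of an instruction hierarchy: platform rules outrank
# developer instructions, which outrank user requests. Lower number wins.
PRIORITY = {"system": 0, "developer": 1, "user": 2}

def resolve(verdicts):
    """Given {role: allow/deny/None} opinions on a request, the
    highest-priority role that expresses an opinion decides.
    None means that level has no opinion."""
    for role in sorted(verdicts, key=PRIORITY.get):
        verdict = verdicts[role]
        if verdict is not None:
            return role, verdict
    return None, True  # nothing forbids the request

# A platform red line wins even if developer and user both permit:
print(resolve({"system": False, "developer": True, "user": True}))
# A developer restriction binds the user when the platform has no opinion:
print(resolve({"system": None, "developer": False, "user": True}))
```

The point of the sketch is just that each level can only narrow, never widen, what the levels above it allow, which matches the "minimum standards and red lines" framing in the comment.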

I think the synthesis here is that we should have enough knowledge that if we were to build an ASI and turned it on, it would in fact do what we tell it to, interpreted in the way that we mean it, and that this is table stakes for getting any sort of good outcome. That is, our problem at the moment is not so much that we don't know what the good is as that we can't reliably make the AI do anything, even if we want it very much and it is in fact good.

This is a very nice article related to this: https://happyfellow.bearblog.dev/computational-tyranny/

And since humans created the rules, by their impartial enforcement we can understand what their underlying motivations actually are. That being, ensuring that reddit discussions are as anodyne and helpful as possible.

Well, really it's "make as much money as possible."

I think people really tend to overrate how much people prioritize maximizing corporate profits compared to ideological motives. Reddit higher-ups genuinely think it's bad when users "advocate violence", they mentally associate it with some sort of Reddit lynch-mob psyching themselves up to murder someone or with those news stories blaming the Rohingya genocide on Facebook. They might also mention something about advertisers if you asked but mostly they just genuinely think it would be morally wrong to allow it, so they created site-wide rules about it many years ago. Much more recently they made an AI to do moderation at scale. The AI can't distinguish between your post and the sort of advocating violence they actually care about, in part because the distinction isn't articulated anywhere or even really thought-out. LLMs aren't relevant because they want pacifist training data, LLMs are relevant in that "automated Reddit moderator banning people for advocating violence" is now something that can exist at all. Anthropic literally scanned millions of print books for more training data, AI companies are not trying to do alignment by sanitizing violence from their training data, especially not in such a roundabout way.

Basically no one thinks, "the thing I want most is to make lots of money." But making money ultimately ends up being a very consistent vector along which behavior is reinforced. And while it's not going to be the most important vector for any given individual, it's one of the vectors nearly every individual has in common, which makes it a useful simplification for how organizations like corporations work.

But 'make lots of money' is only imperfectly correlated with 'the company I work for makes lots of money.' And, indeed, the correlations between 'doing my best to make money for my employer' and both 'make lots of money' and 'the company I work for makes lots of money' are very imperfect. In practice, generating maximum value for the company is only really the optimal path for 1. the owners 2. people in roles with very clear metrics (e.g. sales) -- and then only to the extent those metrics can't be more easily gamed, and 3. those with both a great deal of control over the company and a lot of their compensation tied up in stock options/performance bonuses/etc. (i.e. a handful of executives). Some other roles (e.g. security, compliance) have strong incentives not to lose the company an enormous amount of money (in certain specific ways)... Which isn't actually the same thing, as becomes abundantly clear if you ever have to interact with these people: they'd really much rather nothing gets done if it makes the particular sorts of incidents for which they'll be held responsible slightly less likely.

Everyone else is one or more principal-agent problems away from those incentives. Expecting corporations to actually maximize profit is only slightly less naive than expecting command economies to actually optimize for the public good. Their owners want that, but only a tiny minority of the decisions are actually made by the owners, or by executives, or even by directors. The vast majority are made by bottom-level employees and their direct superiors, which, in large companies, are very detached from the company's actual profitability. They'll lose their jobs if it goes under, of course, but it's not like their personal efforts can do a lot to prevent that or bring it about -- there are a lot of these people.

The incentive is to keep your boss happy enough with you and otherwise do as you like, which might mean slacking off, or using your position to push your morality or politics, or maybe even doing a good job for the simple satisfaction of it. But it's a mistake to assume profitability is the overwhelming incentive, or even a particularly strong one, given how difficult it is for the people who really want that to enforce their will over the entire organization.

I'm three years into my ban from Reddit and it's been the best thing that ever happened to me.

I think it's now obvious that Reddit isn't driving any real-world events anymore; it's not even a bellwether for how the internet feels about national or global events. I watched from the sidelines when the proles got all uppity because Reddit was going to start charging for its API (and thus killing off popular free apps) thanks to AI scraping and such.

Mods organized protests, users voiced their anger... and the Admins clamped down on everything, replaced the worst offenders, ignored the dissent, and things rolled on as before. Nobody even mentions it now.

If Reddit users can't even influence outcomes on the site itself, they're pretty useless for influencing anything outside of it, no? So you would ONLY want to be on there if you could acquire useful information somehow, or b/c you enjoy an echo chamber. Or porn.

Reddit is a completely curated experience for the most part, and so it’s never going to be a vanguard for new ideas. It probably stopped being that in the early 2000s, before the normies showed up. Now it’s mostly low-effort and tryhard shlock that most people have heard some version of before. The memes are not original; in fact they’re basically the same stuff that would have been posted there 20 years ago with the names updated. The AITAH and similar talk forums are basically barely-realistic fanfic-level crap that doesn’t even bring up interesting discussions — and the user is never the asshole, because Reddit doesn’t think any relationship is worth working through the slightest problem for. Like if she burned your dinner, you should dump her immediately, if not sooner, and be sure to ruin something she loves on the way out the door.

Avant-garde stuff does not come from places curated to mainstream tastes. TBH I’d look at 8chan or something for that kind of future opinion-shaping.

Reddit is a completely curated experience for the most part, and so it’s never going to be a vanguard for new ideas. It probably stopped being that in the early 2000s, before the normies showed up.

What? Reddit was founded in 2005, and didn't ban its first subreddit until 2011 (r/jailbait, rest in power).

Kind of shocking how hands off Reddit was given how much of an SJW the founder is right now. I guess he was willing to shut up when he had to, but once he got the network effects, he was ready to push the agenda.

It was a different time when Reddit was founded. Back then the left was confident in its ideas, so they craved free speech, seeing it as the key to winning. It's only when they realized they could also lose in the marketplace of ideas that they turned sour on it.

Hard to overstate how much Donald Trump changed the vibe, too.

He really exploited the idea that you can "just say things" and since it appeared that 4chan played a significant role in his rise to power, the norms of free speech were suddenly cast as the enemy of Democracy, somehow.

It all escalated from there, but with his current win (and him going on a revenge tour) there's been some rapid capitulation almost everywhere BUT Reddit.

If Reddit wanted to make a change, they could start by re-opening /r/the_donald.

Left-libertarian to SJW is not an unusual ideological evolution over the relevant time period.

When it was founded all of the main founders were either libertarians or techno-anarchist types. The ideological evolution of Huffman, Ohanian and so on happened later.

Pretty sure there are now multiple bot accounts that just repost the most-upvoted content on a sub from like a year ago, then add in the same top-upvoted comments on said post.

And from what I can tell Twitter is currently the place that most tightly interfaces with real life events in terms of both causing and quickly reacting to them.

Karma repost/farming bots have been happening for so, so, long

Like way before COVID, probably close to a decade ago

They were rampant on AskReddit when I was in university

Why did it take 20 years for Reddit to turn a profit? Looking at another heavily moderated forum in the past: Twitter! How often did it turn a profit? Why did these companies keep getting funding at ridiculous valuations? Maybe it is a way of doing sentiment engineering at scale, through various behavior-modification tricks with likes, upvotes, retweets. Maybe that was the purpose: not to turn a profit but to modify behavior, to do social engineering. Maybe that is more valuable to the owners?

Looking at another heavily moderated forum in the past: Twitter! How often did it turn a profit? Why did these companies keep getting funding at ridiculous valuations? Maybe it is a way of doing sentiment engineering at scale, through various behavior-modification tricks with likes, upvotes, retweets.

TBH I'm kind of inclined to doubt that the Reddit board, as an organism, is "smart" enough to do that, except in the broadest sense. Like, with as much data and control as a social media site has, I'm pretty sure I could be way more effective at pushing my own idiosyncratic policies than any existing social media site actually is. Reddit at its most ruthless just sort of vaguely boosts leftism, in the exact same way that Tumblr and pre-Elon Twitter did. Probably because if anyone in particular starts trying to press a view hard, there's too much disagreement on the specifics to get very far. Just imagine a world where, for example, the entire board of higher-ups at Facebook were monarchists, including Zuck. They definitely have the power to make monarchism a credible political subcurrent in America... but I think they would sincerely fail to advance the cause of a particular monarch. Zuck would want himself, of course, but members of his board might be crypto-Orleanists, or avowed Bonapartists. In the process of promoting monarchism more generally, they'd have plenty of latitude to advance their own causes, in the end causing self-interference and averaging out.

Reddit’s financial history is pretty interesting. Yes, it lost money for 20 years, but Condé Nast (or rather AP, the parent company) kept selling off small pieces to VC firms and other investors, which meant that both (a) they didn’t lose any money on it and (b) the book value of their stake kept increasing.

When the company went public in 2024 they made $2bn from the IPO; they still own about 25% of the company. And throughout their 18-year ownership, even though Reddit didn’t make money, Condé Nast’s losses on it were minimal as they slowly sold the company off piecemeal.

Controlling the minds of normies is extremely valuable. Elon Musk didn’t buy Twitter for the money. He bought it to use it as a mouthpiece and, more importantly, to keep it from being used against him.

This, 1000 times, is why I despise social media. Nobody is getting real conversation on social media because it’s curated to funnel your mind down a path leading to the pre-approved opinion. I mean, propaganda is so pervasive in the modern West that I think we’re as bad or worse in terms of propaganda and psychological manipulation than the worst totalitarian regimes of the last century. Stalin put out propaganda, sure, but it wasn’t nearly as pervasive as what we have. He had radio, newspapers, and posters. He couldn’t steer private conversations, he couldn’t delete crime-think from social consciousness. He could chill things by arresting obvious and loud dissenters, but that is much more limited than what social media does via AI and deletion. Our propaganda machine hides, and people are led to believe that they are having neutral conversations.

He couldn’t steer private conversations, he couldn’t delete crime-think from social consciousness. He could chill things by arresting obvious and loud dissenters, but that is much more limited than what social media does via AI and deletion.

I think this is at least partly overselling our AI panopticon overlords. This might be true in online spaces, but those aren't everything, and even there, offshoots of sites challenging moderation policies are common (Bluesky, Truth Social). And they have almost no power over IRL discussions and actions; the attempts made a decade ago seem to have overreached and receded. To hear Reddit tell it, there basically aren't any Republicans anywhere in the US, and nobody shops at Hobby Lobby. There are people who cloister themselves to the extent they believe this, but as it turns out, the levers of political power aren't particularly beholden to Reddit dog-walker mods.

It’s not just social media, but regular media, education and control mechanisms like the ability for you to be fired for saying something online, or convincing others to shun friends and even family who say things that the regime doesn’t like. Americans are saturated in propaganda and unless you’re paying attention you probably don’t even notice it.

The AI does allow for an automated police state at scale.

Works for the internet police mods too.

Legibility comes with trade-offs, and limiting freedom is usually one of them.

I believe there's going to be a whole slew of court cases and societal fights over this kind of thing. In the US, at least. Places like the UK seem to be ok with police state mods.