Culture War Roundup for the week of October 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Elon Musk just launched Grokipedia, a kanged version of Wikipedia run through a hideous AI sloppification filter. Of course the usual suspects are complaining about political bias and bias about Elon and whatnot, but they totally miss the whole point. The entire thing is absolutely worthless slop. Now I know that Wikipedia is pozzed by Soros and whatever, but fighting it with worthless gibberish isn't it.

As a way to test it, I wanted to check something easily verifiable with primary sources, without needing actual Wikipedia or specialized knowledge, so I figured I could check out the article for a short story. I picked the story "2BR02B" (no endorsement of the story or its themes) because it's extremely short and available online. And just a quick glance at the Grokipedia article shows that it hallucinated a massive, enormous dump into the plot summary. Literally every other sentence in there is entirely fabricated, or even the total opposite of what was written in the story. Now I don't know the exact internal workings of the AI, but it claims to read the references for "fact checking", and it links to the full text of the entire story. Which means the AI had access to the entire text of the story yet still went full schizo mode anyway.

I chose that article because it was easily verifiable, and I encourage everyone to take a look at the story text and compare it to the AI "summary" to see how bad it is. I'm no expert, but my guess is that most of the articles are similarly schizo crap. And undoubtedly Elon fanboys are going to post screenshots of this shit all over the internet, to the detriment of everyone with a brain. No idea what Elon is hoping to accomplish with this, but I'm going to call him a huge dum dum for releasing this nonsense.

So out of curiosity I opened Grokipedia up and searched the page for New York Yankees, a topic I know enough about to spot errors or omissions pretty well. It's...fine, but the verbiage is kind of off, and the editing is weird. The choice of which facts are important to fit into the article is distinctly odd. It inserts facts at random points, like this paragraph near the top:

On June 25, 2019, they set a new major league record for homering in 28 consecutive games, breaking the record set by the 2002 Texas Rangers. The streak would reach 31 games, during which they hit 57 home runs. With the walk-off solo home run by DJ LeMahieu to win the game against the Oakland Athletics on August 31, 2019, the Yankees ended the month of August that year now holding a new record of 74 home runs hit in the month alone, a new record for the most home runs hit in a month by a single MLB team.

Which is true, as far as I know, but not a record anybody really cares about compared to about a million other things the Yankees have done. It's a lot of text to cover a fairly obscure statistical record, while under the "Distinctions" heading it ignores a lot of more important Yankees accomplishments and records that a human would think of first, like the streaks of winning seasons, etc.

The whole piece steadfastly refuses to achieve any narrative flow at any point, never building a cohesive story structure. And it lacks the fundamental feature of Wikipedia: links between articles that let me learn more about a topic and dive down a Wikipedia hole. There is no Grokipedia hole unless I manually dig it.

On the other hand, the article structure and style is just copied from Wikipedia and slightly shuffled. Significant stretches of the article seem to be pulled word for word from Wikipedia, which was almost certainly in the training data used to make these articles, so what we seem to be dealing with here is better thought of as a fork than a competitor or alternative to Wikipedia. As human editing smooths out the rough edges of the AI, it'll get better over time. Though at that point, what is the use? It's mostly just Wikipedia copied.

I'll put a disclaimer here that I'm not someone with an Elon Musk hate-boner, but I do think Elon is the fly in the ointment here. Grok has publicly done weird shit in the past that was obviously the result of direct meddling, like the South African White Genocide fiasco. We know in advance that some articles are not going to be maximally accurate, but will instead be designed to look the way Elon wants them to look. So you really can't trust Grokipedia, or Grok, without knowing Elon's Special Interests and where they might get you into trouble. There are going to be some articles on Grokipedia that have been edited in a certain way.

Which puts Grokipedia in basically the same category I use Grok for more generally: an alternative source to double-check something I already looked up elsewhere, a sanity check for alternative views. More prosaically, I normally punch a question into ChatGPT, then punch the same question into Grok and see if they agree. Now we can do the same with Wikipedia. That's a useful enough thing.

I suspect that for xAI, Grokipedia is actually more useful as an answer repository for simple questions asked of the chatbot, one that can be tied directly into the program more easily. The next non-American who asks "Who or what are the New York Yankees?" can be answered with a summary of the already-created Grokipedia article.
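
A sketch of that answer-repository idea, with everything made up for illustration (the function names and the in-memory dict standing in for the article store are my inventions; nothing about xAI's actual plumbing is public as far as I know):

```python
# Toy in-memory "repository" standing in for pre-generated Grokipedia articles.
ARTICLES = {
    "New York Yankees": (
        "The New York Yankees are an American professional baseball team "
        "based in the Bronx, New York..."
    ),
}

def summarize(text: str, max_chars: int = 120) -> str:
    """Stand-in for a cheap summarization pass over a cached article."""
    return text[:max_chars]

def generate_from_scratch(question: str) -> str:
    """Stand-in for full chatbot inference, the expensive path."""
    return f"(full model call for: {question})"

def answer(question: str, topic: str) -> str:
    article = ARTICLES.get(topic)
    if article is not None:
        return summarize(article)            # cheap: reuse the pre-written article
    return generate_from_scratch(question)   # expensive fallback

print(answer("Who or what are the New York Yankees?", "New York Yankees"))
```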

Is it definitively established that Grok was pushing white genocide theories to everyone? I tried to get it to repeat the theory to me, but I never got it. I strongly suspect journos were disingenuously framing Grok for gotcha moments, or were just too stupid to realize they were seeding the ground for Grok to parrot whatever the journos wanted. As always, journalist delenda est.

I would not say that Grok was "pushing" those theories, but an update to the system prompt caused it to turn any question it could into an evaluation of the question "is there white genocide happening in South Africa", usually, IIRC, saying that there is significant and probably systemic violence but no evidence meeting the threshold of genocide. Think Golden Gate Claude. It was extremely out-of-context for whatever Grok was supposed to be talking about, hence the widespread attention.

I could accept that, were it not for my repeated failures to get Grok to repeat the story, or shades of it, with any indirect prompt. I mean, I really tried. I said things like "what's the crime situation in South Africa" and got really anodyne crime stats about Joburg and Pretoria. I asked about white emigration and got answers about the attractiveness of Australia and the UK for Afrikaners. At no point did I get a five-alarm fire about white farmer crimes.

I can accept that maybe I never got it due to some arcane blocks I may have put on my own meta-usage, but I don't think I was that smart or careless. I fundamentally think it was a journo trying a gotcha, screaming "MUSK IS A NAZI TRYING TO MAKE WHITE PEOPLE VICTIMS", and then the story got repeated across the journo sphere. Everyone who couldn't get Grok to repeat apartheid-adjacent narratives concluded the absence was proof of a coverup. Journalism 101.

Was this back when it was happening? Because the issue only lasted for a day or two, back in May, and I don't know if it happened to the main Grok or just to the Twitter reply version. It was really, really noticeable.

I was using Grok a lot just to stress test it, so yes, I was doing it within 8 hours of the news publications.

But you raise a good point about the Twitter reply version. I never got that.

I still maintain I never got any white farmer murder stuff on Grok itself. If it's a Twitter reply thing, it invites speculation about a recursive feedback loop.

I would not be at all surprised if Grok has a different system prompt for Twitter replies than for Grok itself, perhaps one edited to move with news cycles. I saw many white genocide non sequiturs myself on Twitter (and, again, not Grok pushing a particular narrative, but exploring and weighing up the question as if it had been asked), and, since I'm an Afrikaner, lots sent by puzzled/amused friends, but nobody mentioned the off-Twitter Grok at the time.

The geographic setting might have been part of it. It could also have snowballed: one initial batch of highly forwarded "bro what the fuck is this" posts triggers an interest cascade, and Grok just starts inserting white farm murders into every African query on Twitter, because engagement farming is its reward mechanism.

That would also explain the Hitler-praising or whatever other bullshit Musk was accused of training Grok to do, which I also never saw. My absolute lack of social media (sans perhaps this forum) is once again saving me.

the page for New York Yankees, a topic I know enough about to spot errors or omissions pretty well. It's...fine,

Strangely, it seems that the New York Yankees article is essentially identical to the Wikipedia article. Like the entire thing. I'm not sure why Grok decided not to take a dump all over it, which it does for so many other articles.

It inserts facts at random points, like this paragraph near the top

That paragraph is taken word-for-word from Wikipedia.

It's mostly just Wikipedia copied.

No, some of the other articles, like the one I shared in the OP, are completely and utterly turned into a shitfest.

Strangely, it seems that the New York Yankees article is essentially identical to the Wikipedia article. Like the entire thing.

It's not, though. Look at the Wikipedia article: Wikipedia's is 25 pages long, Grok's is half that. Wikipedia goes from the general overview at the top to a narrative history of the team, where Grok jumps right to "Distinctions", which it steals from Wikipedia but organizes differently. The paragraph is taken word for word from Wikipedia, but Grok puts it in prime real estate. If I look up the New York Yankees and want to learn about the team, I want to go through the team history, learn about Ruth and DiMaggio and Mantle and Jeter and Judge. It's a perfectly appropriate fact to include on page 14, as Wikipedia does, right before you get into the sections that are just lists of things. Grok puts it on page 2. This is an important editing decision! Organization is content.

"Who or what are the New York Yankees?" can be answered with a summary of the already-created Grokipedia article.

The chatbot can already answer that question, so what's the point of an article that nobody will read?

I would guess it saves time and effort, especially when we know Elon has made it a priority to keep Grok ideologically in line with his specified views. It's probably easier to tell Grok to privilege Grokipedia as a source, then edit Grokipedia or mess with the program producing it where necessary, than it is to actually figure out how to get Grok to toe the ideological line while pulling from largely ideologically opposed material.

The final two paragraphs of your comment are close to some thoughts that have been swimming in my head for some time now. The real step function in AI development will be something like a structured reasoning engine. Not a fact-checker. Just a 'thing' that can take the axioms and raw input data of an argument, or even just a description, and then build an auditable framework for how those inputs lead to a conclusion or output.

Using your Yankees example, this structured reasoning engine would read it, check that all of the basic quantitative numbers are valid, but then "reason" against a corpus of other baseball data to build out something like: Yankees hit lots of home runs in August --> home run hitting is good and important --> records are also important in baseball --> oh, we should highlight this record-setting August for the Yankees!

You can see the flaw in that flow easily: the jump from "home runs and records are important" to the desperate need to "develop" a record, which results in shoehorning significance onto a team's collective home run total in a specific month. A prompt engineer could go back through the sequence and write in something like "annual home runs by single players are generally viewed as significant; team-level home runs are less important", or whatever opinion they have.
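
To make that concrete, here's a minimal sketch of what an auditable chain with patchable steps might look like. Everything in it is hypothetical (the step records, the override mechanism), just illustrating the shape of the idea, not any real reasoning engine:

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    claim: str          # the inference made at this step
    support: list[str]  # inputs or prior claims it leans on

@dataclass
class ReasoningTrace:
    steps: list[ReasoningStep] = field(default_factory=list)

    def add(self, claim: str, support: list[str]) -> None:
        self.steps.append(ReasoningStep(claim, support))

    def audit(self, overrides: dict[str, str]) -> list[str]:
        """Replay the chain, substituting any human-written corrections."""
        report = []
        for i, step in enumerate(self.steps):
            claim = overrides.get(step.claim, step.claim)
            flag = " [OVERRIDDEN]" if step.claim in overrides else ""
            report.append(f"step {i}: {claim}{flag}  <- {step.support}")
        return report

# The flawed chain from the Yankees example, recorded step by step.
trace = ReasoningTrace()
trace.add("Yankees hit lots of home runs in August", ["box scores"])
trace.add("home run hitting is good and important", ["step 0"])
trace.add("records are also important in baseball", ["corpus priors"])
trace.add("highlight the record-setting August prominently", ["steps 1-2"])

# A prompt engineer can now patch the exact step that went wrong.
overrides = {
    "records are also important in baseball":
        "individual season records matter; team monthly totals are trivia",
}
print("\n".join(trace.audit(overrides)))
```

The point being that each inference is a first-class object you can inspect and correct, rather than something buried inside one opaque generation.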


The "reasoning" engines that exist now aren't reasoning. They're just recursive loops of LLMs thinking about themselves. We've successfully created digital native neuroticism.

It's an interesting problem and balancing act. The power of LLMs is that their structure isn't exactly deterministic. Yet we would love a way to create a kind of "synthetic determinism" via an auditable and repeatable structure. If we go too far in that direction, however, we're just getting back to traditional programming paradigms (functional, object-oriented, whatever) and we lose all of the flexibility and non-deterministic benefits of LLMs. Look at some of the leaked system prompts. They're these 40,000-word markdown files with repetitive declarative sentences designed to make sure the LLM stays in its lane.
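
One crude version of that "synthetic determinism" tradeoff, sketched in Python. `call_llm` is a stand-in for whatever chat API you'd actually use, and the schema is invented; the point is just the shape: pin the sampling down, force a fixed output structure, and retry until it conforms:

```python
import json

def call_llm(prompt: str, temperature: float = 0.0) -> str:
    """Placeholder for a real chat-completion call; not a real library."""
    raise NotImplementedError

REQUIRED_KEYS = {"summary", "sources", "confidence"}

def constrained_answer(question: str, max_retries: int = 3) -> dict:
    """Trade free-form flexibility for an auditable, repeatable output shape."""
    prompt = (
        "Answer strictly as JSON with keys: summary, sources, confidence.\n"
        f"Question: {question}"
    )
    for _ in range(max_retries):
        # temperature=0 (greedy decoding) makes outputs mostly repeatable,
        # though not perfectly deterministic across hardware or versions.
        raw = call_llm(prompt, temperature=0.0)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry instead of trusting it
        if REQUIRED_KEYS.issubset(data):
            return data
    raise ValueError("model never produced a schema-conforming answer")
```

Push this far enough and every behavior ends up living in validators and retries, i.e., you're back to writing a traditional program around the model.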

Using your Yankees example, this structured reasoning engine would read it, check that all of the basic quantitative numbers are valid, but then "reason" against a corpus of other baseball data to build out something like: Yankees hit lots of home runs in August --> home run hitting is good and important --> records are also important in baseball --> oh, we should highlight this record-setting August for the Yankees!

What further AI development would avoid is including a record that no one really cares about in prime real estate within the article. That's a cool record, one that a color commentator brings up during a broadcast, and that afterward gets cited in a quick ESPN or fan-blog article, then totally forgotten until another team gets close and they show the leaderboard during a game. It's not something fans care about day-to-day; no Bleacher Creature ever brags about the team holding the monthly home run record.

I suspect the answer is more prosaic: the record-setting August outburst was recent enough to be highlighted in one or more online articles, which Grok found while writing the article and folded into the data, whereas the various great things DiMaggio and Berra did aren't covered as heavily online. An old-timer fan is much more likely to brag about Rivera's save record, DiMaggio's hit streak, Berra having a ring for every finger, Ruth being the GOAT, or Judge holding the "clean" HR record. Those would be the things to cite in the article over the monthly HR record getting a paragraph.

It's the ability to reason your way to judgment, or wisdom, not knowledge.

What further AI development would avoid is including a record that no one really cares about in prime real estate within the article.

For something like this, I don't think any reasoning would be needed, or any significant developments in AI. I don't see why simple reinforcement learning from human feedback wouldn't work. Just have a bunch of generated articles judged on the many factors that go into how well written an encyclopedia entry is, including good use of prime real estate to provide information that would actually be interesting to the typical person looking up the entry, rather than throwaway trivia. Of course, this would have to be tested empirically, but I don't think we've seen indications that RLHF is incapable of compelling such behavior from an LLM.
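
For flavor, here's roughly what that human-feedback signal looks like at the data level. A toy sketch with invented examples; real RLHF pipelines train a reward model on piles of these (chosen, rejected) pairs and then optimize the LLM against it:

```python
# Toy preference data for RLHF-style training: a human rater compares two
# candidate article leads and the preferred one becomes the positive example.
from dataclasses import dataclass

@dataclass
class LeadComparison:
    topic: str
    lead_a: str
    lead_b: str
    preferred: str  # "a" or "b", as judged by a human rater

def to_training_pair(cmp: LeadComparison) -> tuple[str, str]:
    """Return (chosen, rejected) in the format reward-model trainers expect."""
    chosen = cmp.lead_a if cmp.preferred == "a" else cmp.lead_b
    rejected = cmp.lead_b if cmp.preferred == "a" else cmp.lead_a
    return chosen, rejected

example = LeadComparison(
    topic="New York Yankees",
    lead_a="The Yankees hold 27 World Series titles, the most in MLB history.",
    lead_b="In August 2019 the Yankees set a monthly team home run record.",
    preferred="a",  # raters reward spending prime real estate on what readers want
)
chosen, rejected = to_training_pair(example)
```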

Great take.

It's the ability to reason your way to judgment, or wisdom, not knowledge.

AI development is either going to be the Super Bowl for philosophers or their final leap into obscurity. Maybe both?