
Culture War Roundup for the week of October 27, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Elon Musk just launched Grokipedia, a kanged version of Wikipedia run through a hideous AI sloppification filter. Of course the usual suspects are complaining about political bias and bias about Elon and whatnot, but they totally miss the whole point. The entire thing is absolutely worthless slop. Now I know that Wikipedia is pozzed by Soros and whatever, but fighting it with worthless gibberish isn't it.

As a test, I wanted to check something that would be easy to verify against primary sources, without needing actual Wikipedia or specialized knowledge, so I figured I'd check out the article on a short story. I picked the story "2BR02B" (no endorsement of the story or its themes) because it's extremely short and available online. And just a quick glance at the Grokipedia article shows that it hallucinated a massive, enormous dump into the plot summary. Literally every other sentence in there is entirely fabricated, or even the total opposite of what was written in the story. Now I don't know the exact internal workings of the AI, but it claims to read the references for "fact checking," and it links to the full text of the entire story. Which means the AI had access to the entire text of the story yet still went full schizo mode anyway.

I chose that article because it was easily verifiable, and I encourage everyone to take a look at the story text and compare it to the AI "summary" to see how bad it is. I'm no expert, but my guess is that most of the articles are similarly schizo crap. And undoubtedly Elon fanboys are going to post screenshots of this shit all over the internet, to the detriment of everyone with a brain. No idea what Elon is hoping to accomplish with this, but I'm going to call him a huge dum dum for releasing this nonsense.

So out of curiosity I opened up Grokipedia and searched for the New York Yankees page, a topic I know enough about to spot errors or omissions pretty well. It's...fine, but the verbiage is kind of off, and the editing is weird. The choice of which facts are important enough to fit into the article is distinctly odd. It inserts facts at random points, like this paragraph near the top:

On June 25, 2019, they set a new major league record for homering in 28 consecutive games, breaking the record set by the 2002 Texas Rangers. The streak would reach 31 games, during which they hit 57 home runs. With the walk-off solo home run by DJ LeMahieu to win the game against the Oakland Athletics on August 31, 2019, the Yankees ended the month of August that year now holding a new record of 74 home runs hit in the month alone, a new record for the most home runs hit in a month by a single MLB team.

Which is true, as far as I know, but it's not a record anybody really cares about compared to about a million other things the Yankees have done. It's a lot of text to cover a fairly obscure statistical record. Meanwhile, under the "Distinctions" heading, it ignores a lot of more important Yankees accomplishments and records that a human would think of first, like the streaks of winning seasons.

The whole piece steadfastly refuses to achieve any narrative flow at any point; there's no cohesive story structure. And it seems to lack the fundamental feature of Wikipedia: links between articles that let me learn more about a topic and dive down a Wikipedia hole. There is no Grokipedia hole unless I manually dig it.

On the other hand, the article structure and style are just copied from Wikipedia and slightly shuffled. Significant stretches of the article seem to be pulled word for word from Wikipedia, which was almost certainly in the training data used to generate these articles. So what we're dealing with here is better thought of as a fork than a competitor or alternative to Wikipedia. As human editing smooths out the AI's rough edges, it'll get better over time. Though at that point, what's the use? It's mostly just Wikipedia, copied.

I'll put a disclaimer here that I'm not someone with an Elon Musk hate-boner, but I do think Elon is the fly in the ointment here. Grok has publicly done weird shit in the past that was obviously the result of direct meddling, like the South African White Genocide fiasco. We know in advance that some articles are not going to be maximally accurate, but will instead be designed to look the way Elon wants them to look. So you really can't trust Grokipedia, or Grok, without knowing Elon's Special Interests and where they might get you into trouble; some articles are inevitably going to be edited in a certain way.

Which puts Grokipedia in basically the same category I use Grok for more generally: an alternative source to double-check something I already looked up elsewhere, a sanity check for alternative views. More prosaically, I usually punch a question into ChatGPT, then punch the same question into Grok and see if they agree. Now we can do the same with Wikipedia. That's useful enough.

I suspect that for xAI, Grokipedia is actually more useful as an answer repository for simple questions asked of the chatbot, one that can be tied directly into the product more easily. The next non-American who asks "Who or what are the New York Yankees?" can be answered with a summary of the already-generated Grokipedia article.
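Pure speculation on my part about the plumbing, but the lookup itself would be trivial. A toy Python sketch (every name and the question pattern here is made up by me, nothing to do with xAI's actual stack):

```python
import re

# Hypothetical pre-generated summaries keyed by normalized topic; in reality
# these would be lead sections of already-written Grokipedia articles.
SUMMARIES = {
    "new york yankees": (
        "The New York Yankees are a professional baseball team based in "
        "the Bronx, New York City, playing in MLB's American League East."
    ),
}

SIMPLE_QUESTION = re.compile(r"who or what (?:is|are) (?:the )?(.+?)\??$", re.IGNORECASE)

def cheap_answer(question: str):
    """Serve simple lookup questions straight from the repository, skipping a
    model call entirely; anything else returns None and falls through to the LLM."""
    m = SIMPLE_QUESTION.match(question.strip())
    return SUMMARIES.get(m.group(1).lower()) if m else None

print(cheap_answer("Who or what are the New York Yankees?"))
```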

The final two paragraphs of your comment are close to some thoughts that have been swimming in my head for some time now. The real step-function in AI development will be something like a structured reasoning engine. Not a fact-checker. Just a 'thing' that can take the axioms and raw input data of an argument, or even just a description, and build an auditable framework for how those inputs lead to a conclusion or output.

Using your Yankees example, this structured reasoning engine would read it, check that all of the basic quantitative numbers are valid, but then "reason" against a corpus of other baseball data to build out something like: Yankees hit lots of home runs in August --> home run hitting is good and important --> records are also important in baseball --> oh, we should highlight this record-setting August for the Yankees!

You can see the flaw in that flow easily: the jump from "home runs and records are important" to a desperate need to "develop" a record, which results in shoehorning significance onto the collective number of team home runs in a specific month. A prompt engineer could go back through the sequence and write in something like "annual home runs by single players are generally viewed as significant; team-level home runs are less important," or whatever opinion they have.
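To make "auditable" concrete, here's a toy Python sketch of the shape I have in mind (every name in it is hypothetical): each leap in the chain is recorded as a discrete, inspectable step rather than buried in the model's weights, so a human can find the bad link and down-weight it.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    premise: str     # input fact or prior conclusion
    inference: str   # the (possibly unjustified) leap the engine made
    weight: float    # how much the engine trusts this leap

@dataclass
class ReasoningTrace:
    steps: list = field(default_factory=list)

    def add(self, premise: str, inference: str, weight: float = 1.0) -> None:
        self.steps.append(Step(premise, inference, weight))

    def audit(self) -> None:
        """Print the chain so a human can spot (and patch) a bad leap."""
        for i, s in enumerate(self.steps):
            print(f"[{i}] {s.premise} --> {s.inference} (weight={s.weight})")

# The Yankees chain from above, as an auditable trace:
trace = ReasoningTrace()
trace.add("Yankees hit 74 HR in August 2019", "home run hitting is good and important")
trace.add("home run hitting is important", "records are also important in baseball")
trace.add("records are important", "highlight the record-setting August prominently")
trace.audit()

# The prompt engineer's fix amounts to down-weighting the final leap, e.g.
# "team-level monthly HR totals are less significant than individual records."
trace.steps[2].weight = 0.2
```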


The "reasoning" engines that exist now aren't reasoning. They're just recursive loops of LLMs thinking about themselves. We've successfully created digital native neuroticism.

It's an interesting problem and balancing act. The power of LLMs is that their structure isn't exactly deterministic. Yet we would love a way to create a kind of "synthetic determinism" via an auditable and repeatable structure. If we go too far in that direction, however, we're just back to traditional programming paradigms (functional, object-oriented, whatever) and we lose all of the flexibility and non-deterministic benefits of LLMs. Look at some of the leaked system prompts: they're 40,000-word markdown files full of repetitive declarative sentences designed to make sure the LLM stays in its lane.
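Here's a minimal sketch of what "synthetic determinism" could look like, assuming only some generic llm_call(prompt) function that returns text: the model stays free-form inside the loop, but a rigid, repeatable validation gate sits outside it. Tighten the gate far enough and you've reinvented ordinary programming, which is exactly the tension.

```python
import json

MAX_RETRIES = 3

def constrained_answer(llm_call, question: str) -> dict:
    """Force a non-deterministic model through a deterministic gate:
    a fixed schema check with bounded retries."""
    prompt = (
        "Answer as JSON with keys 'claim' (string) and 'source' (string).\n"
        f"Question: {question}"
    )
    for _ in range(MAX_RETRIES):
        raw = llm_call(prompt)  # the flexible, non-deterministic part
        try:
            out = json.loads(raw)
        except json.JSONDecodeError:
            out = None
        if isinstance(out, dict) and isinstance(out.get("claim"), str) \
                and isinstance(out.get("source"), str):
            return out  # passed the deterministic gate
        prompt += "\nYour last reply was not valid JSON with those keys. Try again."
    raise ValueError("model never satisfied the schema")
```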

Using your Yankees example, this structured reasoning engine would read it, check that all of the basic quantitative numbers are valid, but then "reason" against a corpus of other baseball data to build out something like: Yankees hit lots of home runs in August --> home run hitting is good and important --> records are also important in baseball --> oh, we should highlight this record-setting August for the Yankees!

What further AI development would need to avoid is giving a record that no one really cares about prime real estate in the article. It's a cool record, the kind a color commentator brings up during a broadcast, that afterward gets cited in a quick ESPN or fan-blog article, then totally forgotten until another team gets close and they show the leaderboard during a game. It's not something fans care about day-to-day; no Bleacher Creature ever brags about the team holding the monthly home run record.

I suspect the answer is more prosaic: the record-setting August outburst was recent enough to be highlighted in one or more online articles, which Grok found while writing the article and folded into the data, whereas the various great things DiMaggio and Berra did aren't covered as heavily online. An old-timer fan is much more likely to brag about Rivera's save record, DiMaggio's hit streak, Berra having a ring for every finger, Ruth being the GOAT, or Judge holding the "clean" HR record. Those would be the things to cite in the article over giving the monthly HR record a paragraph.

It's the ability to reason your way to judgment, or wisdom, not knowledge.

Great take.

It's the ability to reason your way to judgment, or wisdom, not knowledge.

AI development is either going to be the Super Bowl for philosophers or their final leap into obscurity. Maybe both?