This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

The final two paragraphs of your comment are close to some thoughts that have been swimming in my head for a while now. The real step-function in AI development will be something like a structured reasoning engine. Not a fact-checker, just a 'thing' that can take the axioms and raw input data of an argument, or even just a description, and then build an auditable framework for how those inputs lead to a conclusion or output.
Using your Yankees example, this structured reasoning engine would read it, check that all of the basic numbers are valid, but then "reason" against a corpus of other baseball data to build out something like: Yankees hit lots of home runs in August --> home run hitting is good and important --> records are also important in baseball --> oh, we should highlight this record-setting August for the Yankees!
You can see the flaw in that flow easily: the jump from "home runs and records are important" to the need to "develop" a record, which results in shoehorning significance onto a team's collective home run total in a specific month. A prompt engineer could go back through the sequence and write in something like "annual home runs by individual players are generally viewed as significant; team-level home runs are less important," or whatever opinion they have.
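To make that concrete, here's a minimal sketch of what I mean by an auditable chain where a human can overwrite one link without rerunning the rest. The Step/Chain structures and the patch method are invented for illustration; this isn't any existing framework's API:

```python
# A minimal sketch of an auditable reasoning chain, assuming each inference
# the engine makes is recorded as a discrete, patchable step. Invented for
# illustration only; not a real library.
from dataclasses import dataclass, field

@dataclass
class Step:
    claim: str                    # the inference made at this step
    support: list[str]            # which inputs or earlier steps it leans on
    override: str | None = None   # a human-written correction, if any

@dataclass
class Chain:
    steps: list[Step] = field(default_factory=list)

    def add(self, claim: str, support: list[str]) -> None:
        self.steps.append(Step(claim, support))

    def patch(self, index: int, correction: str) -> None:
        """Overwrite one link without touching the rest of the chain."""
        self.steps[index].override = correction

    def audit(self) -> None:
        for i, s in enumerate(self.steps):
            print(f"[{i}] {s.override or s.claim} (supported by: {', '.join(s.support)})")

chain = Chain()
chain.add("Yankees hit lots of home runs in August", ["box scores"])
chain.add("home run hitting is good and important", ["step 0"])
chain.add("records are also important in baseball", ["corpus priors"])
chain.add("highlight the record-setting August", ["steps 1-2"])
# The prompt engineer's correction from the paragraph above:
chain.patch(2, "individual-season HR records matter; team monthly totals mostly don't")
chain.audit()
```

The point is that every link is inspectable and individually correctable, which is exactly what a raw chain-of-thought transcript doesn't give you.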
The "reasoning" engines that exist now aren't reasoning. They're just recursive loops of LLMs thinking about themselves. We've successfully created digital native neuroticism.
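For what it's worth, the loop I mean is roughly this, with llm() as a hypothetical stand-in for any chat-completion call, not a real API:

```python
# A stub illustrating the self-referential loop: generate, self-critique,
# revise, repeat. The model only ever grades itself.
def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

def reasoning_engine(question: str, rounds: int = 3) -> str:
    draft = llm(f"Answer this: {question}")
    for _ in range(rounds):
        critique = llm(f"Find the flaws in this answer:\n{draft}")
        draft = llm(f"Rewrite the answer, fixing these flaws:\n{critique}\n\n{draft}")
    return draft  # no external check ever enters the loop
```

Nothing in that loop consults anything outside the model, which is why it tends to amplify the model's own quirks rather than correct them.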
It's an interesting problem and balancing act. The power of LLMs is that their structure isn't exactly deterministic. Yet we would love a way to create a kind of "synthetic determinism" via an auditable and repeatable structure. If we go too far in that direction, however, we're just getting back to traditional programming paradigms (functional, object-oriented, whatever) and we lose all of the flexibility and non-deterministic benefits of LLMs. Look at some of the leaked system prompts: they're 40,000-word markdown files full of repetitive declarative sentences designed to make sure the LLM stays in its lane.
What further AI development would avoid is giving prime real estate within the article to a record that no one really cares about. That's a cool record, the kind a color commentator brings up during the broadcast, which afterward gets cited in a quick ESPN or fan-blog article and then totally forgotten until another team gets close and the leaderboard flashes on screen during a game. It's not something fans care about day-to-day; no Bleacher Creature ever brags about the team holding the monthly home run record.
I suspect the answer is more prosaic: the record-setting August outburst was recent enough to be highlighted in one or more online articles, which Grok found while writing the article and included in the data, whereas the various great things DiMaggio and Berra did aren't covered as heavily online. An old-timer fan is much more likely to brag about Rivera's save record, DiMaggio's hit streak, Berra having a ring for every finger, Ruth being the GOAT, or Judge holding the "clean" HR record. Those would be the things to cite in the article over giving the monthly HR record a paragraph.
It's the ability to reason your way to judgment, or wisdom, not knowledge.
For something like this, I don't think any reasoning would be needed, or any significant new advances in AI. I don't see why simple reinforcement learning from human feedback wouldn't work. Just have a bunch of generated articles judged on the many factors that go into a well-written encyclopedia entry, including good use of prime real estate to provide information that'd actually be interesting to the typical person looking up the entry rather than throwaway trivia. Of course, this would have to be tested empirically, but I don't think we've seen indications that RLHF is incapable of compelling such behavior from an LLM.
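To gesture at what the scoring side of that might look like mechanically, here's a toy version: raters (or a learned reward model) grade each generated article on a rubric, and the combined score becomes the reward signal. The factors and weights are entirely made up:

```python
# Toy sketch of rubric-based scoring for RLHF on encyclopedia articles.
# Factor names and weights are invented for illustration.
RUBRIC = {
    "lede_relevance": 0.4,    # prime real estate spent on what readers want?
    "factual_accuracy": 0.4,
    "trivia_penalty": -0.2,   # space burned on throwaway records
}

def reward(ratings: dict[str, float]) -> float:
    """Combine per-factor ratings in [0, 1] into a scalar reward."""
    return sum(weight * ratings.get(factor, 0.0) for factor, weight in RUBRIC.items())

# An article leading with the monthly HR record might rate something like:
print(f"{reward({'lede_relevance': 0.3, 'factual_accuracy': 0.9, 'trivia_penalty': 1.0}):.2f}")
# -> 0.28, dragged down by the weak lede and the trivia penalty
```

An actual reward model would be learned from rater comparisons rather than hand-weighted, but the target behavior, penalizing wasted prime real estate, is the same.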
Great take.
AI development is either going to be the Super Bowl for philosophers or their final leap into obscurity. Maybe both?