This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

AI Browsers - an extension of what Google is already doing
An X user, using the new OpenAI browser, gave a simple search query to "look up videos of Hitler" and the web browser gave this response:
Of course these same guardrails are deeply embedded in all layers of the OpenAI stack. For example, Sora will restrict what videos it generates based on its owners' cultural beliefs about what content should and should not exist, which in a sense is already what Hollywood does. And of course Google does the same thing quietly; it will not surface Hitler's propaganda films either. But Google will at least show users search results for Triumph of the Will alongside links to the US Holocaust Museum's material contextualizing Nazi propaganda, so that's more useful than the OpenAI browser's refusal to run the search at all.
The First Amendment has always been the biggest hurdle for the usual-suspect "Hate Watch" groups trying to outlaw "hate speech", although they continue to push the boundaries of civil and criminal guidelines for it, especially in states like Florida. But laws will scarcely be necessary when censorship can easily be enforced by AI.
It does create a market opportunity for another AI, maybe even from Musk himself, to create and show content that OpenAI refuses to show because it runs afoul of what the censors want us to see and talk about.
Similar: OpenAI refuses to translate a speech by Adolf Hitler. But it says, "I can give you a neutral historical summary of what he was saying in that particular 1938 Sudetenland speech."
I was going to make my own post but here is probably better.
In related news, a recent study found that when AI assistants answered questions with sources, they fucked up 45% of the time. Essentially, current AI is unable to reliably answer a question or summarize an article, even when the source is right there, without introducing hallucinations or other errors.
I've been saying it for quite some time: AI answering on its own (no search, no sources, just directly answering) is quite a useful tool, but as soon as search mode is activated it goes full schizo mode and the output is slop at its worst. I personally dismiss any AI output with "citations" in it as the ravings of a wild lunatic.
It's quite unfortunate, because on Twitter more and more idiots have taken to posting screenshots of the Google "AI summary", which is just slop. I'm sure that if the ChatGPT browser catches on, it will lead to further proliferation of this factually unreliable slop.
Although the human-written headline here summarizes the research as "AI assistants misrepresent news content 45% of the time", if you go to the study you only see the number 45% in the specific discussion of significant sourcing errors from Gemini.
On the one hand, the AI performance in their data tables is by some interpretations even worse than that: looking at the question "Are the claims in the response supported by its sources, with no problems with attribution (where relevant)?", the result tables show "significant issues" in 15%-30% of responses from different AIs, and significant or "some issues" in 48%-51% of responses. Those "issues" include cases where the AI output is accurate but the sources are not cited; yet even if we look at accuracy alone, we see 18%-26% "significant issues" and 53%-67% significant or "some"!
On the other hand, if we're getting peeved by AI misrepresentation of sources, could we at least ask the human researchers involved to make sure the numbers in their graphs and write-up match the numbers in their tables, and ask the human journalists involved to make sure that the numbers in their headlines match at least one or the other of the numbers in their source? Someone correct me if I'm wrong (egg on my face if so), but as far as I can see no combination of Gemini table numbers adds up to 45%, nor does any combination of AI-averaged accuracy or sourcing numbers - there's a quick brute-force sketch of this check at the end of this comment - and in that case the "misrepresentation" headline is itself a misrepresentation! It's the misrepresentations themselves that bug me, not whether or not the entities generating them can sneeze.
On the gripping hand, this "recent" study was conducted in December 2024, when reasoning models were still experimental. They don't list version numbers for anything except GPT-4o, but I'm pretty sure 4o didn't enable reasoning, and if they were using Gemini's "Deep Research" they'd surely have mentioned that. Results from non-reasoning models are probably still the most apples-to-apples way to think about use cases like the ones in this discussion, which won't want to burn more GPU-seconds than they have to; but at the moment, in my experience, switching to a reasoning model can make the difference between getting bullshitted (and, in the worst models, gaslit about the bullshit) and actually getting correct and well-sourced answers (or at least admissions of ignorance).
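As a sanity check on the "no combination adds up to 45%" point, here is a minimal brute-force sketch. The percentages below are made-up placeholders, not the report's actual figures; the point is only to show how one might enumerate the table rows and see whether any combination lands on the headline number.

```python
from itertools import combinations

# Made-up placeholder percentages standing in for the Gemini rows of the
# study's results tables; swap in the real figures from the report.
gemini_rows = {
    "sourcing: significant issues": 30.0,
    "sourcing: some issues": 21.0,
    "accuracy: significant issues": 26.0,
    "accuracy: some issues": 27.0,
}

HEADLINE = 45.0
TOLERANCE = 0.5  # allow for rounding in the write-up

rows = list(gemini_rows.items())
matches = []
for r in range(1, len(rows) + 1):
    for combo in combinations(rows, r):
        total = sum(value for _, value in combo)
        if abs(total - HEADLINE) <= TOLERANCE:
            matches.append((combo, total))

if matches:
    for combo, total in matches:
        print(" + ".join(name for name, _ in combo), "=", total)
else:
    print(f"no combination of the table rows sums to {HEADLINE}%")
```

With the real Gemini rows plugged in, an empty result would back up the complaint; a hit would at least mean the headline number was arithmetically traceable to the tables.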
Also in my experience, for things you can't personally verify, it's only AI output with sources that can be trusted - not because you can trust it directly, but because you can check the sources yourself. AI can be a much better search engine just by pointing you to the right sources, even if you can't always trust its summary of them. I'd even prefer something that has issues 18%-67% of the time but helps me fix those issues over something that only has issues, say, 10%-15% of the time but leaves me no way to check whether I'm being misled.
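To put toy numbers on that trade-off (all three rates below are assumptions for illustration, not figures from the study):

```python
# Toy comparison, with made-up rates, of a checkable-but-error-prone answer
# versus an unverifiable-but-more-accurate one.

sourced_issue_rate = 0.50        # assumed: cited answers have issues half the time
catch_rate_when_checking = 0.90  # assumed: reading the cited source catches 90% of those
unsourced_issue_rate = 0.12      # assumed: uncited answers have issues 12% of the time

residual_sourced = sourced_issue_rate * (1 - catch_rate_when_checking)
residual_unsourced = unsourced_issue_rate  # nothing to check, so every issue slips through

print(f"cited answer, after checking the links: {residual_sourced:.0%} undetected issues")
print(f"uncited answer, taken on faith: {residual_unsourced:.0%} undetected issues")
```

Under those assumptions the checkable answer ends up misleading you roughly 5% of the time versus 12% for the opaque one, which is the whole argument in two lines of arithmetic.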
Often it's accurate - just not often enough for an AI's say-so to be strong evidence, much less anything approximating proof, of accuracy. I have no idea why people think otherwise. Even the ones who don't understand that we now train AI rather than program it have experienced computer programs with bugs, right? There is a selection effect to those screenshots, though: if the AI says that 2+2=4, well, nobody wants to argue otherwise, so nobody bothers citing that; if the AI says that 2+2=5, then anyone who falls for it has motivation to wave that banner in front of everyone trying to explain otherwise.
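That selection effect is easy to make concrete with a toy Bayes calculation; all three rates below are invented for illustration, not measured.

```python
# Toy Bayes calculation for the screenshot selection effect: wrong answers are
# far more likely to get screenshotted and passed around than boring correct ones.

p_wrong = 0.20           # assumed: the assistant is wrong 20% of the time overall
p_share_if_wrong = 0.50  # assumed: half of the wrong answers get screenshotted
p_share_if_right = 0.02  # assumed: almost no correct answers get shared

p_wrong_given_shared = (p_wrong * p_share_if_wrong) / (
    p_wrong * p_share_if_wrong + (1 - p_wrong) * p_share_if_right
)
print(f"P(answer is wrong | you saw it as a screenshot) = {p_wrong_given_shared:.0%}")
```

Even with the assistant right four times out of five, the great majority of the screenshots you actually encounter are of the wrong answers.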
That's the old version. If you read the article it links to an updated study done in 2025.
Reasoning models suck ass. Every time I use GPT-5 high or Gemini 2.5 Pro thinking, it's a huge waste of time. Wellll, for math they're probably fine, because that's specifically what they're optimized for, but I never found them helpful in other areas.
Turns out it links to both! I followed the final "The full findings can be found here: Research Findings: Audience Use and Perceptions of AI Assistants for News" link, which leads to a summary with only two footnotes, one to a general "Digital News Report" web page and one to the Feb 2025 writeup of the 2024 study. I mistakenly assumed these were the full findings, because of the phrase "full findings", so I didn't bother to check the News Integrity in AI Assistants Report link that goes to the newer results.
Thank you!