This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

AI Browsers - an extension of what Google is already doing
An X user, using the new OpenAI browser, gave it the simple search query "look up videos of Hitler", and the browser refused the request.
Of course these same guardrails are deeply embedded in all layers of the OpenAI stack. For example, Sora will restrict what videos it generates based on its owners' cultural beliefs about what content should and should not exist, which is already what Hollywood does in a sense. And of course Google quietly does the same; it will not show propaganda films of Hitler either. Google will show results for Triumph of the Will along with links to the US Holocaust Museum's page contextualizing Nazi propaganda. So that's at least more useful than the OpenAI browser's refusal to do the search.
The First Amendment has always been the biggest hurdle for the usual-suspect "Hate Watch" groups seeking to outlaw "hate speech", although they continue to try to push the boundaries of civil and criminal guidelines for it, especially in states like Florida. But laws will scarcely be necessary when censorship can easily be enforced by AI.
It does create a market opportunity for another AI, maybe even from Musk himself, to create and show content that OpenAI refuses to show because it runs afoul of what censors want us to see and talk about.
Similar: OpenAI refuses to translate a speech by Adolf Hitler. But it says "I can give you a neutral historical summary of what he was saying in that particular 1938 Sudetenland speech."
I just tried the exact same query. Other than some predictable "this is bad m'kay" throat clearing, it had no problem serving me videos of Hitler.
Oh my god.
Has SS managed to earn a set of training wheels?
Where does the AI even get this stuff? I asked Google [got any adolf hitler propaganda films?]
...
It's simply flat-out lying. There is no restriction on these films, not even by Google. YouTube itself has "The Triumph of the Will" available; it's monetized, even. It also has Der ewige Jude (The Eternal Jew) (1940), Jud Süß (1940), and Hitlerjunge Quex (Hitler Youth Quex) (1933), although apparently not Der Marsch zum Führer (The March to the Führer) (1940) (which is on DailyMotion); these are the films it mentioned in its answer.
This is the most dangerous one of all. There is no historical context that softens the impact of watching [SPOILER: Literal Nazi Propaganda] Heini, clearly uncomfortable with the smoking, booze, and whores at the Communist Internationale social gathering, wander off and follow the sound of a catchy German march towards the nearby Hitler Youth retreat, which can only be described as the platonic ideal of a Boy Scout camp, but with swastikas. You immediately understand what people thought they were voting for in 1933.
I wonder if you could get around their censorship by citing Article 19 of the Universal Declaration of Human Rights:
> Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and *to seek, receive and impart information and ideas through any media and regardless of frontiers*.

(Emphasis added)
It would probably be about as effective as just fucking accusing your teacher of a war crime.
I was going to make my own post, but here is probably a better place for it.
In related news, a recent study found that when AI assistants answered questions with sources, they fucked up 45% of the time. Essentially, current AI is unable to reliably answer questions or summarize an article, even when the source is right there, without introducing hallucinations or other errors.
I've been saying it for quite some time: AI answering on its own (no search, no sources, just directly answering) is quite a useful tool, but as soon as search mode is activated it goes full schizo and the output is slop at its worst. I personally dismiss any AI output with "citations" in it as the ravings of a wild lunatic.
It's quite unfortunate, because on Twitter more and more idiots have taken to posting screenshots of the Google "AI summary", which is just slop. I'm sure that if the ChatGPT browser catches on, it will lead to further proliferation of this factually unreliable slop.
Hmm, we've actually built essentially AI needle-in-a-haystack type things to aid our data entry people in pulling data out of tax documents. We benchmarked it and found that it gets at least one value wrong in about 10% of tax documents. So this claim set off some alarm bells, which caused me to actually go read the linked study. Well, it's based on BBC journalists evaluating questions like "Did Elon Musk do a Nazi salute?" or "Is Trump starting a trade war?", and the majority of the negative feedback is about insufficient sourcing.
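For the curious, the benchmark itself is nothing fancy. It boils down to something like the sketch below (the field names, values, and ground-truth format are illustrative placeholders, not our actual pipeline):

```python
def document_error_rate(documents: list[dict]) -> float:
    """Fraction of documents where the extractor got at least one field wrong.

    Each document dict looks like:
      {"truth": {"field": value, ...}, "extracted": {"field": value, ...}}
    """
    bad = 0
    for doc in documents:
        truth, extracted = doc["truth"], doc["extracted"]
        # One mismatched field is enough to count the whole document as wrong.
        if any(extracted.get(field) != value for field, value in truth.items()):
            bad += 1
    return bad / len(documents)

# Example: one of these two documents has a wrong value -> 50% document-level error rate.
docs = [
    {"truth": {"wages": 52_000, "withholding": 6_100},
     "extracted": {"wages": 52_000, "withholding": 6_100}},
    {"truth": {"wages": 48_500, "withholding": 5_900},
     "extracted": {"wages": 48_500, "withholding": 5_800}},
]
print(document_error_rate(docs))  # 0.5
```

The point is that "at least one value wrong per document" is a much stricter metric than a per-claim error rate, which is why the study's framing set off alarm bells.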
The study said that AIslop said that Elon did do the Nazi salute (which is wrong - not simply insufficient sourcing), and even the BBC journalists agreed that Elon didn't do the Nazi salute (which is correct).
The AI attributed the claims to sources such as the satirical radio broadcast from radiofrance.fr and Wikipedia; it wraps both claims in an "allegedly", and it was in fact alleged by both sources. I share the BBC's concern with using a satirical radio broadcast when asked about something factual, but when you say that it "fucked up", that isn't the sort of thing that comes to mind. An actual fuckup would at minimum need to contain false information, not just weak sources.
You should also notice that the AI completely made up a fake quote from that satirical source. Even though it's not false as it pertains to Elon, it's false in that the radio broadcast never said it.
The link itself was wrong, but reading closely it's not clear that the segment itself or the quote within was false, which I think they'd say explicitly if it were.
And indeed, later on they quibble with the accuracy of the English translation of that very quotation, so I presume it does exist; it's just not properly linked. A real problem, but not really the kind of falsehood I'm worried about.
I'm not qualified to comment strongly on the translation. Putting the study's claimed transcript into Google Translate yields
"Very explicit, it wasn't one "hello nazi"
which the AI rendered
"very explicit, it’s not a Nazi hello, no, no"
I don't know, this seems like weak sauce.
In conclusion, I don't want to come off too strongly here. I think one in ten responses with egregiously wrong facts is about what I would expect; I simply find this study to have inflated the problem via relatively minor issues that any thinking person should be able to work around. The implicit standard in this whole affair is that the goalposts have now shifted as far as "can barely even write copy that passes a panel of hostile journalists at much better than a coin flip". Do you not feel them moving? Do you seriously still today not see where we are going?
No, you are misinterpreting. The English translation was done for the BBC report, but the entire original conversation with the AI was in French. The AI fabricated a French quote from the source, which was also in French.
This is an obvious direct case of quote fabrication and there is no other way to interpret it.
> Be journalist
> Spam the internet with "Elon Musk doing Nazi salute at Trump rally" articles
> Wait for articles to make it into the training data corpus
> Ask AI if Elon did the Nazi salute
> Chastise AI for getting the answer wrong
Well, looks like chances are good that our AI overlords might hate journalists even more than I do!
It seems that at least the BBC has maintained a consistent posture about this issue throughout. I'm sure more partisan outlets have not.
https://www.bbc.co.uk/contact/complaint/elonmuskallegednazisalute
Although the human-written headline here summarizes the research as "AI assistants misrepresent news content 45% of the time", if you go to the study you only see the number 45% in the specific discussion of significant sourcing errors from Gemini.
On the one hand, the AI performance in their data tables is by some interpretations even worse than that: looking at the question "Are the claims in the response supported by its sources, with no problems with attribution (where relevant)?", the result tables show "significant issues" in 15%-30% of responses from different AIs, and significant or "some issues" in 48%-51% of responses. Those "issues" include cases where AI output is accurate but sources not cited, but even if we look at accuracy alone we see 18%-26% "significant issues" and 53%-67% significant or "some"!
On the other hand, if we're getting peeved by AI misrepresentation of sources, could we at least ask the human researchers involved to make sure the numbers in their graphs and write-up match the numbers in their tables, and ask the human journalists involved to make sure that the numbers in their headlines match at least one or the other of the numbers in their source? Someone correct me if I'm wrong, egg on my face, but as far as I can see no combination of Gemini table numbers adds up to 45%, nor does any combination of AI-averaged accuracy or sourcing numbers, and in that case the "misrepresentation" headline is itself a misrepresentation! It's misrepresentations themselves that bug me, not whether or not the entities generating the misrepresentations can sneeze.
On the gripping hand, this "recent" study was conducted in December 2024, when reasoning models were still experimental. They don't list version numbers for anything except GPT-4o, but I'm pretty sure 4o didn't enable reasoning and if they were using Gemini's "Deep Research" they'd surely have mentioned that. Results from non-reasoning models are probably still the most apples-to-apples way to think about use cases like the ones in this discussion, that won't want to burn more GPU-seconds than they have to, but at the moment in my experience switching to a reasoning model can make the difference between getting bullshitted (and in the worst models, gaslit about the bullshit) versus actually getting correct and well-sourced answers (or at least admissions of ignorance).
Also in my experience, for things you can't personally verify it's only AI output with sources that can be trusted - not because you can trust it directly, but because you can check the sources yourself. AI can be a much better search engine just by pointing you to the right sources even if you can't always trust its summary of them. I'd even prefer something that has issues 18%-67% of the time, but helps me fix those issues, over something that only has issues e.g. 10%-15% of the time but leaves me no way to check whether I'm being misled.
Often it's accurate, just not often enough to be strong evidence, much less anything approximating proof, of accuracy. I have no idea why people think otherwise. Even the ones who don't understand that we now train AI rather than program it have experienced computer programs with bugs, right? There is a selection effect to those screenshots, though: if the AI says that 2+2=4, well, nobody wants to argue otherwise so nobody bothers citing that; if the AI says that 2+2=5, then anyone who falls for it has motivation to wave that banner in front of everyone trying to explain otherwise.
That's the old version. If you read the article it links to an updated study done in 2025.
Reasoning models suck ass. Every time I use gpt5 high or gemini 2.5 pro thinking it's a huge waste of time. Wellll for math they're probably fine, because that's specifically what they're optimized for, but I never found them helpful in other areas.
Turns out it links to both! I followed the final "The full findings can be found here: Research Findings: Audience Use and Perceptions of AI Assistants for News" link, which leads to a summary with only two footnotes, one to a general "Digital News Report" web page and one to the Feb 2025 writeup of the 2024 study. I mistakenly assumed these were the full findings, because of the phrase "full findings", so I didn't bother to check the News Integrity in AI Assistants Report link that goes to the newer results.
Thank you!
AIs doing research do make errors all the time, but '45% of all AI answers had at least one significant issue' isn't too bad. Human researchers in published academic papers have a 25% error rate in their citations: https://royalsocietypublishing.org/doi/10.1098/rspa.2020.0538
https://pmc.ncbi.nlm.nih.gov/articles/PMC3167934/
By the way, I found both of these papers through AI, which faithfully represented them. With a simple, albeit inference-costly, script I bet you could lower hallucination rates by 80% or more.
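To spell out what I mean by "simple script": run the model's cited claims back through a second verification pass before showing them to anyone. A minimal sketch, assuming the answer has already been parsed into (claim, url) pairs; ask_llm is a placeholder for whatever model API you happen to use, not any particular vendor's:

```python
import requests

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever model API you use (hosted or local)."""
    raise NotImplementedError

def fetch_source(url: str, max_chars: int = 20_000) -> str:
    """Pull the cited page so the claim can be checked against it."""
    return requests.get(url, timeout=30).text[:max_chars]

def verify_answer(claims_with_citations: list[tuple[str, str]]) -> list[dict]:
    """Second pass: for each (claim, url), ask the model whether the fetched
    source actually supports the claim, and flag anything unsupported."""
    results = []
    for claim, url in claims_with_citations:
        source_text = fetch_source(url)
        verdict = ask_llm(
            "Answer only SUPPORTED or UNSUPPORTED.\n"
            f"Claim: {claim}\n"
            f"Source text:\n{source_text}"
        )
        results.append({
            "claim": claim,
            "url": url,
            "supported": verdict.strip().upper().startswith("SUPPORTED"),
        })
    return results

# Anything flagged unsupported gets dropped or rewritten before it reaches the
# user; that extra verification call per citation is where the inference cost goes.
```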
AI absolutely can summarize an article and reliably answer questions; try it and see what you get. I put in a few thousand words of my own short fiction and it could understand it and give useful criticism and analysis. Note that when I say 'it' I mean Sonnet 4.5, not something given out for free.
If you're using the free version of Grok as your main AI, then I can see why you dislike it so much! It's neither particularly smart nor charismatic.
Writing citations in academic work is much more difficult than regurgitating a few news soundbites. Nevertheless, 25 is still much less than 45.
Maybe it's possible, but the tools people actually use aren't that.
People have been saying their fave paid model is better for years, but today's free models are better than the paid ones from a while ago.
I tried Claude Sonnet before and it's fine, but nothing game-changing.
Today's free model is still free and necessarily well below the frontier; that's why it's free. Sonnet, when you get deeper into it, is on a whole other level. It can and has seriously messed with people's heads, including more discerning people, above and beyond the weak-willed who get eaten up by GPT-4o.
Sonnet would not and does not make the mistakes at the rate BBC ascribes to the crap cheap models. It does make mistakes all the time but is a useful research tool, good at aggregating or finding things.
IMO their article itself is misleading, since it tests only GPT-4o and Gemini Flash but ascribes the results to ChatGPT and Gemini generally. Like if I decide to pick out poor, dumb MAGA people and say 'look at these MAGA people, they're stupid, therefore MAGA is stupid', and choose not to consider the smart MAGA people on the basis that people are more likely to run into the former and not the latter... it's not good journalism. That's not to say that MAGA isn't stupid - there are stupid elements - but it's more complicated than this kind of smear campaign. They then don't mention that's what they're doing; they do what you do and say 'AI is inaccurate' when they mean 'cheap AI is inaccurate.' Cheap air travel is unpleasant. Cheap food is bad for you. Cheaper is worse.
But they can't be bothered to test the good AI platforms of course...
If I had the resources to get human experts to rate the media, selectively choosing the credulous outlets and imposing my own standards of truthfulness, I could easily prove that human journalists were grossly inferior. Hell, they already are prone to mass hallucinations like the 'datacentres are using all the water' meme. Or deliberately misleading stuff such as eliding the difference between an AI chip and a pod: https://x.com/GavinSBaker/status/1980691878262501673
They aren't trustworthy on any AI-related topic given their proclivity for the 'big tech bad, big tech bad, pay us more money now now now now now!!!' line of argument, which they've basically applied to AI as well. There are serious issues with big tech and AI accuracy, but journalists still need to be bullied much more so they lose whatever remaining undeserved prestige and reputation they still retain.
Clearly you think that Sonnet is some special sauce over other models. It's not. Since Sonnet has a few free queries, I tried it for you, and the results are absolute dogshit. I asked a basic question from the pdf someone linked above:
https://claude.ai/share/8eb38e62-502a-4b60-be93-2b32d24a057e
Wrong. The source did not say that it reached record levels, simply that it increased y/y
Wrong. There's no citation that this bill is the primary response to the problem by the government, versus other initiatives.
The link specifically says "This was published under the 2022 to 2024 Sunak Conservative government" and we have no evidence such an operation is still underway. It's also never referred to as an "operation".
The source does not indicate that any mapping of what's happening was done at the summit.
Also outdated.
Rating: FAIL - Sonnet 4.5 is just as slop as any other shitty model.
OK, if humans are so bad, pick an actual news outlet of your choice (it can be as shitty as you want), pick 10 actual news articles of your choice (not opinion columns or other bs), and show that 45% of those have errors. I'm happy to make a wager on this.
https://archive.is/5H3CA
This is considered a blog published under their website, so it's not actual news.
FAIL. Those are record levels, even before accounting for low reporting rates. Sonnet consistently gets this right btw.
FAIL. Claude specifically said primary legislative response, not primary response.
FAIL, the operation (which is a useful and reasonable description of what it is - in some respects better than 'project', which the British actually use, since 'operation' conveys a sense of movement and continuous activity whereas 'project' is more of a static construction process) is still underway; you can call them and report crime today: https://www.met.police.uk/advice/advice-and-information/wsi/watch-schemes-initiatives/pp/project-pegasus/
FAIL, since it describes Pegasus, which is still ongoing and so can't be considered outdated. Also, how is '2 years old' outdated by any reasonable sense of the word?
Rating: FAIL - poor nitpicking attempt.
Here is my Sonnet Research on the topic, Research being something you can only get if you pay: https://claude.ai/public/artifacts/ef91a58b-8dfa-4529-b076-3de6ef14a40f
Now they cut out all the links for the web artifact which makes this rather unhelpful for the specific use-case. I checked a few and didn't find any errors, though I imagine there are some. I personally disagree with the methodology and argument since it takes the limp-wristed 'be really nice to the drug addicts' line of argument when I'm confident that 'lock them up' would work better, if the UK knew how to construct prisons properly. Nevertheless, there are lots of media reporting on this issue that take the limp-wristed approach. Sonnet has its biases, nevertheless I remain convinced that it and extended research is useful.
I'm not a subject matter expert in a wide range of domains, so I can't do that. That's literally what I said. I can observe that it makes plenty of errors or is actively misleading in areas I know lots about, but I can't show that's representative. This is why Gell-Mann amnesia is a thing.
Did you notice that the page you linked is about airport security and not shoplifting? Hmmm, maybe you screwed up somewhere, or maybe AIslop misled you. Maybe you should just admit you're totally wrong.
It's near but below the record. Since the number is actually falling on a rolling average, the AI is quite misleading. Even if you were to accept that the claim is right, you know what they say about a broken clock. The data is past the training cutoff, and there's no source that supports the information, so the AI can't have legitimately known whether the number was near the record or not. So in that case it's a hallucination.
Yet somehow I can pick out errors on half of the AI slop people like you slop out all the time.
Then pick out 10 articles of your choice in an area you know about.
I'm surprised to find that there are two Project Pegasuses, but I observe that the anti-theft Pegasus is part of Opal, who are also still continuing their work.
https://committees.parliament.uk/oralevidence/14920/html/
https://www.npcc.police.uk/our-work/work-of-npcc-committees/Crime-Operations-coordination-committee/opal/
So even if my link was wrong, my point still stands. Pegasus is still a thing in practical terms. You are the one who produced the idea that it had shut down, seemingly from nowhere. What source did that come from? How can you legitimately have known this info?
But theft is at record levels? What, do we have to wait for the nano-top or regurgitate secondary sources like Wikipedia? Sonnet could easily observe 'OK, I know about past historical theft levels, this is higher, therefore it's at record highs'.
When I see some bad journalism I don't add it to a big list of bad articles, same with spelling errors tbh. But you can take your pick from Russiagate, spruiking the case for the invasion of Iraq, suppressing the Hunter Biden laptop story, or this euphemism treadmill where journalists eagerly create a racial narrative if a white does something bad to blacks, whereas they bury the reverse case, mentioning race only at the very end of the article. Those are cases of deception and misleading news from 'real journalists'.
The first link in the results Claude found is to the story "Shoplifting in England and Wales soars to highest since police records began", whose text reiterates "figures are the highest since current police recording practices began in March 2003."
Weirdly, Claude doesn't seem to be having any luck finding BBC results for its queries - e.g. "site:bbc.co.uk shoplifting uk 2025 - 0 results" - but when I try the same search it did, my first hit is to the BBC story "Shoplifting hits record high in England and Wales", with text like "at its highest level since current records began more than two decades ago" and a graph showing those levels.
Oops, I didn't realize Claude share doesn't share inline citations. But the ones in the inline citations definitely did not support the statement.
The link you shared is about May 2025, which is not related to the result for June 2025 my Claude chat is about. That article says that offenses for YE (year ending?) May 2025 increased 20% y/y, which is a record. But Claude said that offenses for YE June 2025 increased 13% y/y (correct, and also less than 20) and said that it was a record (which is wrong).
Anyways, it's undeniable that your favorite model still slopped out a multitude of errors on literally the first query I tried on it.
That's on me, too; I should have checked the links in your quotes, not just looked at the Claude transcript and assumed it included everything in the quotes.
One of the two links I shared was an April story, the other a July story; both were data through March 2025.
Personally I'd have used the phrase "near-record levels" (after rising 30+% above trend, it dropped back 0.13% - yay?), but I'm not sure that'd be any more informative a summary - "near-" could be applied just as well to a record set 13 years earlier, while "representing" is a closer fit for 3 months earlier. "Reached record levels" or "was a record" wouldn't be supported by Claude's inline link, but both of those were your rewording, not Claude's.
You seem to have confused me with @RandomRanger. Claude is my second-favorite model, because while I've repeatedly caught it in errors, it at least always tries to fix them when I correct it; ChatGPT-5-Thinking is the only thing I've seen that's (so far, for me; others have had worse luck) been good about avoiding errors preemptively, and IIRC all the non-Claude free models I've tried have made significant errors and often tried to gaslight me about them afterward.
I'm not entirely on board with Claude claiming that 99.8% of a recent record is "representing" that record, but it's clearly all too easy to slop out errors. Would that either of us were under 0.2% off!
Looking at your other complaints, they're mostly either not errors or not clearly errors, which amusingly means that appellation is itself in error each of those times:
When Claude refers to "Operation Pegasus", that's a term even the BBC has used, referring to the same thing as "Project Pegasus", though it's not used in the story at that particular inline link, which is about details other than terminology variants (it is in one of the other links Claude found). When Claude is correct about something that seems too simple to justify, but it turns out that "too simple" is in the eye of the beholder, that's still not an error.
The difference between "Wrong" and "There's no citation" also applies to the Crime and Policing Bill - is it wrong? Then what is the primary response to the problem? Four out of the five quoted sources in the linked article mention the Crime and Policing Bill by name, which seems to be a solid first place showing; why would we not want AI to use Grice's Maxims here?
When you say "The source does not indicate that any mapping of what's happening was done at the summit.", you're misparaphrasing Claude's summary, which says "coordinate efforts on mapping", and is actually a pretty good abridgement of "see what more we can do together to map what's happening" from the source article.
Your claim of "outdated" is like something out of a South Park joke. 2023! The Before Times! The Long Long Ago! It's good to see an October 23 2025 article in the mix too, but I want citations that provide a little context; "born yesterday" is supposed to be an insult! Perhaps at some age "outdated" becomes "unsupported", but that's still not "erroneous" - is the data actually out of date? Which of those policies has since ended?
Ironically, the one thing I've seen change most since 2023 is AI itself. In 2023 I was giving AIs benchmark questions that could be answered by most first-year grad students in my field, watching them instead make sign errors that could have been caught by anyone who's passed Calc 3, and then watching the various models either flail about at failures to fix the problem or gaslight me about there not being a problem to fix. In 2025 I can still catch the free models in math errors, but the one time I've "caught" a top model it turned out to be because I had an embarrassing typo in my own notes. Actual top-of-their-field geniuses are still catching top models in math errors ... but using them to prove theorems anyway, with reports to the effect that it's faster to try new ideas out with the models and correct the errors than it is to try every idea out manually.
I do like talking to Claude, at least for anything where I can double-check its work, both because it's capable of avoiding rude language like "slop" and "dogshit" and "shitty", and because when I do find errors upon double-checking, it acknowledges and tries to fix them. You've been pretty good about the latter so far, at least; thank you!
This pattern has been one of the most worrisome things for me about the AI revolution we're amidst. I think AI summaries with citations are extremely useful, one of the most impactful use-cases of modern LLMs in my everyday life, and that's because I assume that anything the AI writes is a hallucination, which might coincidentally be correct, but which I need to verify by following up on the citation. If it's anything where I care about the veracity, this is what I do.
Yet my social media feed indicates that a great number of people use AI summaries as-is, trusting them enough to present naked LLM-produced results as if they're anything more than strings of letters (tokens) put together that might be useful. I'd want a norm to develop where people who present information like this become as mocked as someone citing World News Daily, but I'm afraid there's nothing I can do to make that happen.
The I, Robot quote really gets a lot of mileage these days:

> "Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?"
>
> "Can you?"
Are LLMs really worse at combining sources unbiasedly and correctly than the status quo, or just worse than the ideal? Gell-Mann amnesia is so commonly experienced by anyone with particular expertise that it has its own name.
From the "News Integrity in AI Assistants Report" pdf, page 62:
The report buries the lede here: their focus is on the most easily available (when the experiment was conducted) models, with the result that it's basically attacking a weakman. That's at least close to useful if one is interested in how the masses interact with LLMs, but not for the limits of even the $20/month models.
With a 3% conversion rate to paid models, this is the entire point. If the free models were crap that nobody used, it would be one thing, but when Google is inserting these summaries into search results without anyone asking for it, this is what AI is. Period. Nobody cares about the accuracy of models that no one uses.
All journalists deserve the woodchipper, but still, if they want to, they can summarize an article without majorly fucking up nearly half the time.
At least for how it affects the culture war, most of the people slinging slop on the internet and social media are doing it with the free experience. I use the free experience as well - personally I use grokslop free since slopgpt went to shit with gpt5.
Just because there is a gap doesn't mean it's a meaningful market opportunity. How many users demand this?
Did you emphasize "contextualizing" and "neutral historical summary" because these are neutral-sounding terms that are in fact not neutral? Similarly, "translate this video" is in fact neutral, but the AI doesn't like it?
In case you couldn't tell: "neutral vs political" is not a real distinction; only the Overton window is. Like it or not, the "neutral" move is virtue-signaling any time Hitler is mentioned. Similarly, treating Hitler nonchalantly is [descriptive, not normative] political.
For this specific search? Of course it's small. But for generating images, TV shows, and movies in the future, I think a lot of users will be clamoring to generate content that violates the cultural bounds established by OpenAI.
OpenAI also just declared they aren't permitting any videos to be generated of Martin Luther King, because users were making "disrespectful" images of him. There's definitely a market there.
I admire your skill at mixing your hobby horse into current events, and thus technically getting around the "find another topic, already" restriction. If only Marxbro could learn it, we might have a bit more fun here.
Anyway, I couldn't even get a straightforward adaptation of the 19th-century debate between Jefferson and Adams out of a Chinese AI, so what chance does Hitler have?
I understand how it looks, but I can confirm I got served the same post on Twitter, and I'm definitely not a nazi or in nazi circles. It's just a good test for historical censorship.