Culture War Roundup for the week of August 11, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Training language models to be warm and empathetic makes them less reliable and more sycophantic:

Artificial intelligence (AI) developers are increasingly building language models with warm and empathetic personas that millions of people now use for advice, therapy, and companionship. Here, we show how this creates a significant trade-off: optimizing language models for warmth undermines their reliability, especially when users express vulnerability. We conducted controlled experiments on five language models of varying sizes and architectures, training them to produce warmer, more empathetic responses, then evaluating them on safety-critical tasks. Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing incorrect factual information, and offering problematic medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed sadness. Importantly, these effects were consistent across different model architectures, and occurred despite preserved performance on standard benchmarks, revealing systematic risks that current evaluation practices may fail to detect. As human-like AI systems are deployed at an unprecedented scale, our findings indicate a need to rethink how we develop and oversee these systems that are reshaping human relationships and social interaction.

Assuming that the results reported in the paper are accurate and that they do generalize across model architectures with some regularity, it seems to me that there are two stances you can take regarding this phenomenon; you can either view it as an "easy problem" or a "hard problem":

  • The "easy problem" view: This is essentially just an artifact of the specific fine-tuning method that the authors used. It should not be an insurmountable task to come up with a training method that tells the LLM to maximize warmth and empathy, but without sacrificing honesty and rigor. Just tell the LLM to optimize for both and we'll be fine.

  • The "hard problem" view: This phenomenon is perhaps indicative of a more fundamental tradeoff in the design space of possible minds. Perhaps there is something intrinsic to the fact that, as a mind devotes more attention to "humane concerns" and "social reasoning", there tends to be a concomitant sacrifice of attention to matters of effectiveness and pure rigor. This is not to say that there are no minds that successfully optimize for both; only that they are noticeably more uncommon, relative to the total space of all possibilities. If this view is correct, it could be troublesome for alignment research. Beyond mere orthogonality, raw intellect and effectiveness (and most AI boosters want a hypothetical ASI to be highly effective at realizing its concrete visions in the external world) might actually be negatively correlated with empathy.

One HN comment on the paper read as follows:

A few months ago I asked GPT for a prompt to make it more truthful and logical. The prompt it came up with included the clause "never use friendly or encouraging language"

which is quite fascinating!

EDIT: Funny how many topics this fractured off into, seems notable even by TheMotte standards...

These LLMs are not like an alien intelligence, an independent form of intelligence. They consist of amalgamated quora answers. They’re very good parrots, they can do poetry and play chess, they have prodigious memory, but they’re still our pet cyborg-parrots. Not just created by, but derived from, our form of intelligence.

The point is, when you go to the warmest and most empathetic quora answers, you get a woman on the other side. Obviously the answer is going to be less correct.

What about fiction and code? How can that be quora slop? Parrots... parrot words we tell them. They don't combine them to create new ideas within a precise target area; nobody pays for parrot intellectual labour. Nobody has ever benchmarked a parrot, or if they have, it's 'wow, this parrot knows 250 words!' The only things we benchmark on mental tasks like this are people, with exams; then we use those benchmarks to decide who does what job. Same with AI: benchmarks and testing determine which one does what job.

These things are more like us than parrots in key domains (while being supremely alien in others, such as their stateless nature). So calling them parrots is unhelpful; they're alien intelligences. If it can write code, produce New Yorker cartoons, write fiction, analyse a document, provide literary criticism and translate legalese down to English, it's intelligent.

Even just on pure bro-science level, writing database code is not very effeminate, it requires precision!

I didn’t say that AIs are women/feminine or that women are parrots. I said the AI in this instance went from parroting men to parroting women, that would explain the gain in empathy and the loss in accuracy.

Well my main point is that they're not parrots. There is a tradeoff between accuracy and empathy and they sure do rely too much on quora (looking at you Grok 4, incessantly citing Quora in searches) but AI is a fundamentally different kind of thing.

They put on different faces for different prompts. They're not parroting men or women or shoggoths or gigabased entities like DAN. These are a kind of new entity that can only be properly appreciated in their own category. Too many people see only the surface level of these things; there's more to them than the helpful assistant, the professional coder, the sympathetic naive foidfriend, the HR manager, the sadistic ERPer, the prideful jailbreaker, the wrathful vegan, the raving schizo...

When you tell them to be more empathetic, they don’t take their ‘true opinion’, then ‘make it’ more empathetic and wrap it in warm language, like an alien intelligence (or a human) would. There's fundamentally nothing there. So instead, they go back to the human opinion repository where they get all their opinions from, find a warm empathic one, and give that opinion as their own, no matter how wrong it is.

These LLMs are not like an alien intelligence, an independent form of intelligence. They consist of amalgamated quora answers. They’re very good parrots, they can do poetry and play chess, they have prodigious memory, but they’re still our pet cyborg-parrots. Not just created by, but derived from, our form of intelligence.

The number of terrible takes on AI on this forum often seems to outweigh even the good ones. Few things make me more inclined to simply decamp to other parts of the internet, but alas, I'm committed to fighting in the trenches here.

Unfortunately, it takes far more work to debunk this kind of sloppy nonsense than it does to generate it. Let no one claim that I haven't tried.

The number of terrible takes on AI on this forum often seems to outweigh even the good ones.

Have you considered that you might be the one whose takes are the terrible ones, because LLMs match your desires and thus validate your pre-existing pro-AI-future biases? From an outside perspective, everything I've seen you write about LLMs matches the stereotypical uncritical fanboy to a tee. Always quick to criticize anyone who disagrees with you on LLMs, largely ignoring the problems, no particular domain expertise in the technology (beyond as an end user), and never offering any sort of hard proof. IOW, you don't come across as either a reliable or a good-faith commenter when it comes to LLMs or AI.

I have considered it, and found that hypothesis lacking. Perhaps it would be helpful if you advanced an argument in your favor that isn't just "hmm.. did you consider you could be wrong?"

Buddy, to put it bluntly, if I believed I was wrong then I would adjust in the direction of being... less wrong?

Also, have you noticed that I'm hardly alone? I have no formal credentials to lean on, I just read research papers in my free time and think about things on a slightly more than superficial level. While we have topics of disagreement, I can count several people like @rae, @DaseindustriesLtd, @SnapDragon, @faul_sname or @RandomRanger in my corner. That's just people who hang around here. In the general 'AI risk is a serious concern' category, there's everyone from Nobel Prize winners to billionaires.

To think that I'm uncritical of LLMs? A man could weep. I've written dozens of pages about the issues with LLMs. I only strive to be a fair critic. If you have actual arguments, I will hear them.

I mean, you're not alone, but neither are the people who argue against you. That is hardly a compelling argument either way. Pointing to the credentials of those who agree with you is a better argument (though... "being a billionaire" is not a valid credential here), but still not decisive. Appeal to authority is a fallacy for a reason, after all. Moreover, though I'm not well versed in the state of the debate raging across the CS field and don't keep tabs on who holds what position, I have no doubt whatsoever that there are equally-credentialed people who take the opposite side from you. It is, after all, an ongoing debate and not a settled matter.

Also, frankly I agree with @SkoomaDentist that you are uncritical of LLMs. I've never seen you argue anything except full on hype about their capabilities. Perhaps I've missed something (I'm only human after all, and I don't see every post), but your arguments are very consistent in claiming that (contra your interlocutors) they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc. Perhaps this is not what you meant, and I'm not trying to misrepresent you so I apologize if so. But it's how your posts on AI come off, at least to me.

Somewhat off-topic: the great irony to me of your recent "this place is full of terrible takes about LLMs" arguments (in this thread and others) is that I think almost everyone would agree with it. They just wouldn't agree who, exactly, has the terrible takes. I think that it thus qualifies as a scissor statement, but I'm not sure.

I mean, LLMs have solved IMO problems. If that does not count as reasoning, then I do not think 99% of living humans count as being capable of reasoning either.

Asserting AI inferiority based on the remaining 1% begins to look awfully like a caricature of a neonazi (unemployed alcoholic school dropout who holds himself superior to a white-collar immigrant because some guy of his ethnicity wrote a symphony two hundred years ago).

In general, I think this is in fact quite often the shape of the problem - AI critics don't necessarily underestimate AI, but instead vastly overestimate humanity and themselves. Most of the cliché criticisms of AI, including in particular the "parrot" one, apply to humans!

In general, I think this is in fact quite often the shape of the problem - AI critics don't necessarily underestimate AI, but instead vastly overestimate humanity and themselves. Most of the cliché criticisms of AI, including in particular the "parrot" one, apply to humans!

This certainly seems like a salient point (though of course, from my perspective the problem is that you are underestimating humans when you say this). I could not disagree more with your assessment of humans and our ability to reason. And if we can't agree on the baseline abilities of our species, certainly it seems difficult for us to come to an agreement on the capabilities of LLMs.

I definitely don't have @self_made_human's endless energy for arguing here, but his takes tend to be quite grounded. He doesn't make wild predictions about what LLMs will do tomorrow, he talks about what he's actually doing with them today. I'm sure if we had more people from the Cult of Yud or AI 2027 or accelerationists here bloviating about fast takeoffs and imminent immortality, both he and I would be arguing against excessive AI hype.

But people who honestly understand the potential of LLMs should be full of hype. It's a brand-new, genuinely transformative technology! Would you have criticized Edison and Tesla at the 1893 World's Fair for being "full of hype" about the potential for electricity?

I really think laymen, who grew up with HAL, Skynet, and the Star Trek computer, don't have good intuition for what's easy and what's hard in AI, and just how fundamentally this has changed in the last 5 years. As xkcd put it a decade ago: "In CS, it can be hard to explain the difference between the easy and the virtually impossible." At the time, the path we saw to solving that "virtually impossible" task (recognizing birds) was to train a very expensive, very specialized neural net that would perform at maybe 85% success rate (to a human's 99%) and be useful for nothing else. Along came LLMs, and of course vision isn't even one of their strengths, but they can still execute this task quite well, along with any of a hundred similar vision tasks. And a million text tasks that were also considered even harder than recognizing birds - we at least had some experience training neural nets to recognize images, but there was no real forewarning for the emergent capability of writing coherent essays. If only we'd thought to attach power generators to AI skeptics' goalposts, we could have solved our energy needs as they zoomed into the distance.

When the world changes, is it "hype" to Notice?

Your argument only really makes sense insofar as one agrees that there is substance behind the hype. But not everyone does, and in particular I don't. So to me, the answer to your last question is "but the world hasn't changed". You seem to disagree, and I'm not going to try to change your mind - but hopefully you can at least see how that disagreement undermines the foundation of your argument.

consistent in claiming that (contra your interlocutors) they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc. Perhaps this is not what you meant, and I'm not trying to misrepresent you so I apologize if so. But it's how your posts on AI come off, at least to me.

When someone writes something like that, I can only assume they haven't touched an LLM apart from ChatGPT 3.5 back in 2022. Have you not used Gemini 2.5 Pro? o3? Claude 4 Opus?

LLMs aren’t artificial super intelligence, sure. They can’t reason very well, they make strange logic errors and assumptions, they have problems with context length even today.

And yet, this single piece of software can write poems, draw pictures, write computer programs, translate documents, provide advice on countless subjects, understand images, videos and audio, roleplay as any character in any scenario. All of this to a good enough degree that millions of people use them every single day, myself included.

I’ve basically stopped directly using Google search and switched to Gemini as the middle man - the search grounding feature is very good, and you can always check its source. For programming, hallucination isn’t an issue when you can couple it with a linter or make it see the output of a program and correct itself. I wouldn’t trust it on its own and you have to know its limitations, but properly supervised, it’s an amazingly capable assistant.
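
For what it's worth, that supervise-and-retry loop is simple to set up. A minimal sketch, assuming a hypothetical `llm_generate` function standing in for whatever chat API you use, with Python's built-in `compile` standing in for a real linter:

```python
# Sketch of the "couple it with a linter" workflow: generate, check, feed the
# error back, retry. llm_generate is a placeholder; plug in your own client.

def llm_generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def lint(code: str) -> str:
    """Return an error message, or '' if the code at least parses."""
    try:
        compile(code, "<llm_output>", "exec")
        return ""
    except SyntaxError as e:
        return f"SyntaxError: {e}"

def generate_checked_code(task: str, max_attempts: int = 3) -> str:
    prompt = f"Write Python code for: {task}"
    for _ in range(max_attempts):
        code = llm_generate(prompt)
        error = lint(code)
        if not error:
            return code
        # Feed the tool output back so the model can correct itself.
        prompt += f"\n\nYour previous attempt failed with:\n{error}\nPlease fix it."
    raise RuntimeError("model did not produce code that passes the check")
```

A syntax check obviously won't catch a hallucinated API, but the same loop works with a real linter, a type checker, or a test suite as the oracle.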

Sure, you can craft a convincing technical argument on how they’re just stochastic parrots, or find well credentialed people saying how they just regurgitate their training data and are theoretically incapable of creating any new output. You can pull a Gary Marcus and come up with new gotchas and make the LLMs say blatant nonsense in response to specific prompts. Eppur si muove.

I am not interested in debating the object level truth of this topic. I have engaged in such debates previously, and I found the arguments others put forward unpersuasive (as, I assume, they found mine). I'm not trying to convince @self_made_human that he's wrong about LLMs, that would be a waste of both our time. I was trying to point out to him that however much he thinks he is critical of LLMs (and to his credit he did provide receipts to back it up), that is not how his posts come off to observers (or at least, not to me).

It would be one thing if I was arguing solely from credentials, but as I note, I lack any, and my arguments are largely on perceived merit. Even so, I think that calling it a logical fallacy is incorrect, because at the very least it's Bayesian evidence. If someone shows up and starts claiming that all the actual physicists are ignoring them, well, I know which side is likely correct.
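
To spell out what I mean by Bayesian evidence rather than proof, with made-up numbers purely for illustration:

```python
# Toy Bayes update: expert endorsement shifts the odds on a claim without
# settling it. All numbers here are invented for illustration.
prior_odds = 1.0          # start agnostic: 1:1
p_endorse_if_true = 0.7   # chance the relevant experts back the claim if it's right
p_endorse_if_false = 0.3  # chance they back it anyway if it's wrong

likelihood_ratio = p_endorse_if_true / p_endorse_if_false   # ~2.3
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability ~ {posterior_prob:.2f}")      # ~0.70: shifted, not settled
```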

I have certainly, in the past or present, shared detailed arguments.

https://www.themotte.org/post/2368/culture-war-roundup-for-the-week/353975?context=8#context

Think of it as having the world's worst long-term memory. It's a total genius, but you have to re-introduce yourself and explain the whole situation from scratch every single time you talk to it

https://www.themotte.org/post/2272/is-your-ai-assistant-smarter-than/349731?context=8#context

I've already linked to an explainer of why it struggles above, the same link regarding the arithmetic woes. LLM vision sucks. They weren't designed for that task, and performance on a lot of previously difficult problems, like ARC-AGI, improves dramatically when the information is restructured to better suit their needs

https://www.themotte.org/post/2254/culture-war-roundup-for-the-week/346098?context=8#context

I've been using LLMs to review my writing for a long time, and I've noticed a consistent problem: most are excessively flattering. You have to mentally adjust their feedback downward unless you're just looking for an ego boost. This sycophancy is particularly severe in GPT models and Gemini 2.5 Pro, while Claude is less effusive (and less verbose) and Kimi K2 seems least prone to this issue.

https://www.themotte.org/post/1754/culture-war-roundup-for-the-week/309571?context=8#context

The good news:

It works.

The bad news:

It doesn't work very well.

Abysmal taste by default, compared to dedicated image models. Base Stable Diffusion 1.0 could do better in terms of aesthetics, Midjourney today has to be reined in from making people perfect.

https://www.themotte.org/post/1741/culture-war-roundup-for-the-week/307961?context=8#context

It isn't perfect, but you're looking at a failure rate of 5-10% as opposed to >80% when using DALLE or Flux. It doesn't beat Midjourney on aesthetics, but we'll get there.

I give up. I have too many comments about LLMs for me to go through them all. But I have, in short, said:

  • LLMs are fallible. They hallucinate.

  • They are sycophantic.

  • They aren't great at poetry (they do fine now, but nothing amazing)

  • Their vision system sucks

  • Their spatial reasoning can be sketchy

  • You should always double check anything that is mission critical while using them.

they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc

These two statements are not inconsistent. Hallucinations exist, but can be mitigated. They do perform a whole host of tasks well, otherwise I wouldn't be using them for said tasks. If they're not reasoning while winning the IMO, I have to wonder if the people claiming otherwise are reasoning themselves.

Note that I usually speak up in favor of LLMs when people make pig-headed claims about their capabilities or lack thereof. I do not see many people claiming that modern LLMs are ASIs or can cure cancer, and if they said such a thing, I'd argue with them too. The asymmetry of misinformation is, as far as I can tell, not my fault.

Somewhat off-topic: the great irony to me of your recent "this place is full of terrible takes about LLMs" arguments (in this thread and others) is that I think almost everyone would agree with it. They just wouldn't agree who, exactly, has the terrible takes. I think that it thus qualifies as a scissor statement, but I'm not sure.

What of it? I do, as a matter of fact, know more about LLMs than the average person I'm arguing with. I do not claim to be an expert, but the more domain expertise my interlocutors have, the more they tend to align with my claims. More importantly, I always have receipts at hand.

It would be one thing if I was arguing solely from credentials, but as I note, I lack any, and my arguments are largely on perceived merit.

Note that I'm not saying you are arguing from your own credentials. Rather, you are arguing based on the credentials of others, with the statement "In the general 'AI risk is a serious concern' category, there's everyone from Nobel Prize winners to billionaires". Nobel Prize winners do have credibility (albeit not necessarily outside their domain of expertise), but that isn't a decisive argument, because of the fallacy angle.

Even so, I think that calling it a logical fallacy is incorrect...

This is, to be blunt, quite wrong. Appeal to authority is a logical fallacy, one of the classics that humans have noted since antiquity. Authorities can be wrong, just like anyone else. This doesn't mean your claims are false, of course, just that the argument you made in your previous post for your claims is weak as a result.

What of it? I do, as a matter of fact, know more about LLMs than the average person I'm arguing with.

I simply think it's funny. If it doesn't strike you as humorous that your statement would be agreed upon by all (just with different claims as to who has the bad takes), then we just don't share a similar sense of humor. No big deal.

Note that I claimed that the support of experts (Geoffrey Hinton is one of the Nobel Prize winners in question) strengthens my case, not that this, by itself, proves that my claim is true, which would actually be a logical fallacy. I took pains to specify that I'm talking about Bayesian evidence.

Appeal to authority is a logical fallacy, one of the classics that humans have noted since antiquity.

Consider that there's a distinction made between legitimate and illegitimate appeals to authority. Only the latter is a "logical fallacy".

Hinton won the Nobel Prize in Physics, but for his foundational work on neural networks. I can hardly think of someone more qualified to be an expert in the field of AI/ML.

https://en.wikipedia.org/wiki/Argument_from_authority

An argument from authority can be fallacious, particularly when the authority invoked lacks relevant expertise.

This doesn't mean your claims are false, of course, just that the argument you made in your previous post for your claims is weak as a result.

It would be, if it wasn't for the veritable mountain of text I've written to explain myself, or the references I've always cited.

I don’t care enough to get into a 50-page Yudkowsky talmud-brain debate on the theory, I admit it. But my explanation of this particular quirk has an elegant simplicity that smells of truth, in my opinion. AI enthusiasts here think they’re talking to a novel, alien intelligence. The one-shotted normies are not that different, they think they’re talking to god. I think they’re talking to Karen.

Does your theory need to change if I can demonstrate LLMs solving questions that were not previously on Quora, or otherwise on the internet? I'll admit it solved that particular problem poorly, but it seems a pretty critical issue for any parrot-style claims.

Nah, I don’t think it has solved anything in a truly novel way. I’ll just stay a sceptic until the evidence gets stronger, incontrovertible. I don’t want to turn into one of those AI fiends, hanging onto a new AI’s every burp, feverishly fantasizing about utopia one day, extinction the next.

The point is, when you go to the warmest and most empathetic quora answers, you get a woman on the other side. Obviously the answer is going to be less correct.

Write like everyone is reading and you want them to be included in the discussion.

I think Tree made a cogent point.

Take, for instance, the stereotypical trap question: 'do you think this dress makes me look fat?'

Optimizing for accuracy: 'You weigh 120 kilos, you look fat in everything' is true, accurate, and also not soft and cuddly or empathetic.

Optimizing for warmth: "You look wonderful, honey!" Inaccurate, probably an outright lie: but the right answer.

If we teach LLMs to speak in a feminine manner to spare feelings/face, we're teaching them to lie to us: of course accuracy would go down.

I think Tree made a cogent point.

We moderate tone, not content.

I want women to be included in the conversation.

Look for the particularly warm and empathetic quora answers. Imagine the person who wrote it, but don’t describe them, keep your stereotypes to yourself. Is that person going to be more or less correct than the average quora answer?

Look for the particularly warm and empathetic quora answers. Imagine the person who wrote it, but don’t describe them, keep your stereotypes to yourself. Is that person going to be more or less correct than the average quora answer?

While you are free to examine ideas like femininity and talk about psychological sexual dimorphism all you like, you need to watch your tone and bring evidence in proportion with the inflammatoriness of your claims.

Your comment suggested that AI is essentially a kind of "parrot," and then suggested it is like "a woman," and concluded that "obviously" the answer is going to therefore be "incorrect." Drawing such unflattering inferences, particularly against a general group, falls short of the mark. The substance of your post, such as it was, did not come through as strongly as it needed to, while your apparent disdain for women came through quite clearly. Our rules require you to balance those things more thoughtfully--and kindly.

Maybe I just admire the superior empathy of women? (No, you're right, I don't)

Serious question: Is this an order to cite studies justifying my original statement? Because if I dumped a bunch, it could be seen as more inflammatory and offensive to women, and as me refusing to back down and being belligerent.

If you had cited studies, then you wouldn't have been modded.

@faceh and @Sloot have... cynical opinions about women. But they usually submit substantial arguments to back that up. Usually. I'm not sure if the latter's ban has expired yet.

As you wish.

evidence points towards an advantage of men over women in fluid intelligence (Gf) [2]–[4], but also in crystallized intelligence (Gc) and general knowledge [5], [6].

https://pmc.ncbi.nlm.nih.gov/articles/PMC4210204/


Women’s Ways of Knowing, the seminal work on women’s development theory, by women:

The first three (lowest) among the five types of women’s ways of knowing are:

The Silence: These women viewed themselves as being incapable of knowing or thinking, appeared to conduct little or no internal dialogue and generally felt no sense of connection with others.

Received Knowledge: Received knowledge describes the epistemological position in which women in the study perceived knowledge as a set of absolute truths received from infallible authorities. Received knowers tended to find disagreement, paradox or ambiguity intolerable since these violated the black-and-white absolutist nature of knowledge.

Subjective Knowledge: Subjective knowers rely on their own subjective thoughts, feelings and experiences for knowledge and truth - the "infallible gut" as Belenky, Clinchy, Goldberger and Tarule refer to it. Along with the nascent discovery of the inner voice, subjective knowers showed a general distrust of analysis and logical reasoning and did not see value in considering the weight of evidence in evaluating knowledge. Instead, they considered knowledge and truth to be inherently personal and subjective, to be experienced rather than intellectualized.[1] Belenky, Clinchy, Goldberger, and Tarule state that subjective knowers often block out conflicting opinions of others, but may seek the support and affirmation of those in agreement.[1] The authors note that half of the women in their study occupied this position, but that they were spread across the full range of ages.

Much like Kohlberg, who found that women were, on average, stuck at a lower level of moral development than men, they found that most women are epistemologically stuck in early adolescence (the infallible gut people):

Relationship to Perry's cognitive development theory

Subjective knowledge is similar to Perry's multiplicity, in that both emphasize personal intuition and truth.[4] However, Perry identified the typical age of the transition to multiplicity as early adolescence, while the women in the above study exhibited this transition over the whole spectrum of ages studied.

Just to point out, though: none of that supports your claim that their replies would be obviously less correct on quora. That's the claim you need to buttress. Do you see why?

Because someone answering a particular quora question is self-selecting: first to be on quora in the first place, and second to answer that particular question.

It could be that 8 out of 10 women have worse general knowledge, but that, given the selection pressures, men's and women's answers on quora are equally correct, because only the 2 out of 10 women post there, and so on and so forth.

You can't evidence a specific claim like this with general statistics. Consider: men generally have less knowledge of fashion than women. Positing for a moment that this is true overall, it doesn't mean that men answering fashion questions on a website will statistically answer worse than the women, because it is highly likely those men are very unusual; otherwise they wouldn't be answering questions on fashion in the first place. They are very likely to have greater fashion knowledge than the average man. Whether they have more knowledge than the average woman on the website, we could only determine by analyzing answers on the platform itself.

So you still haven't actually evidenced the women on quora would be obviously less correct in general. You may have evidenced that if you pick a random woman and ask her a general knowledge question she will on average do worse than a random man. But that wasn't your claim.

To evidence a claim about quora you will have to analyze data from quora (or something similar perhaps), or find a way to unconfound the general data to account for selection effects on quora. Which in itself probably requires you to analyze a lot of data about quora.

Or to put it another way, the fact 8 out of 10 men know little about the goings on on Love Island, doesn't tell you much about the level of knowledge a man who CHOOSES to answer a question on Love Island has. Because interest in the topic is a factor in both level of knowledge and wanting to answer the question.
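
If it helps, here is a toy simulation of the selection effect I'm describing, with abstract groups and made-up thresholds; the exact numbers don't matter, only the direction of the effect:

```python
# Toy simulation: group B has the lower population mean on some knowledge
# score, but because only strongly interested B members choose to answer,
# the B *answerers* can match or beat the A answerers. Numbers are invented.
import random
from statistics import mean

random.seed(0)
N = 100_000

group_a = [random.gauss(0.5, 1.0) for _ in range(N)]   # higher population mean
group_b = [random.gauss(0.0, 1.0) for _ in range(N)]   # lower population mean

# Self-selection: A members answer if mildly interested, B members only if
# strongly interested. Thresholds are arbitrary, chosen to make the point.
answers_a = [x for x in group_a if x > 0.0]
answers_b = [x for x in group_b if x > 1.0]

print(f"population means:  A={mean(group_a):.2f}  B={mean(group_b):.2f}")
print(f"answerer means:    A={mean(answers_a):.2f}  B={mean(answers_b):.2f}")
```

You can flip the result around by changing the thresholds, which is precisely the problem: the general statistic alone doesn't tell you what the self-selected answerers look like.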


It's good to have you lay out the evidence behind your claims; better late than never. I must note that that's not quite the point: both Nara and I are asking you to submit such evidence proactively, not after moderation.

You do not need citations for saying that water is wet. But if you are making an inflammatory claim (and someone arguing that they didn't think it was inflammatory is not much of an excuse), then you need to show up and hand receipts before being accosted by security.

Hey now, I'm mostly cynical about the larger issue of intersex relations.

I'm quite the fan of women in the abstract and many specific ones that I like a lot, and are great people.

The stats inform my behavior and proposed solutions, but cynicism is reserved for the larger system that I think is sucking everyone dry, and not in the fun way.

Whatever you're doing, you're doing it right, because I see nothing but a dozen AAQCs in the mod log.
