
Friday Fun Thread for February 3, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and it is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


We should just ignore these surveys; the selection effects for people who respond are much too strong. They don't tell us much about regular people.

I cannot endorse simping of any stripe, whether it's huffing a girl's farts or Aella's brainfarts.

Aella's web presence is manifestly great for driving attention to Aella. Not sure what anyone else gets out of it.

EDIT: Scrolled down and saw I was beaten to it lol.

They tell us about Aella's followers, and I find that mildly interesting. It explains more about the techy coomer demographic.

Sure, I more meant that it's not generalizable.

I think most of the directional correlations would probably be generalizable. For example, if you see from the survey that men in Aella's demographic are more likely to enjoy visual porn than the women in Aella's demographic, you can infer that men in general are more likely to enjoy visual porn than women in general, even if it tells you nothing about the actual rates at which men in general enjoy visual porn. Or are there any correlations listed that you think would be reversed if we redid this survey but asked everyone instead of just Aella's followers?

Or are there any correlations listed that you think would be reversed if we redid this survey but asked everyone instead of just Aella's followers?

I think most of these are likely to generalize, but enough won't that it's useless to attempt to learn anything from the survey. I don't know these sorts of statistics off the top of my head, so I'd have a hard time pointing to particular results, but if you put a gun to my head then I'd focus on answers particularly likely to be related to Aella's userbase or to the nature of surveys. So here are a few that I think won't generalize:

  1. The world would be better without Christianity (women are much more religious than men)

  2. If I could live anywhere in the world without paying more... I would move (I expect men would move at greater rates)

  3. Basically everything about socially conservative people, because I think socially conservative people who follow Aella are particularly weird and unlikely to match the broader population.

I don't really care to make predictions about the sex stuff save to say that I think the data is wrong enough to not be worth learning from.

I want to reiterate that these are totally unprincipled; I'm basically pointing at some of the survey responses that conflict most with my intuition. In other words, the survey just hasn't shifted my priors one bit.

Fair enough. To me the survey is mostly just something of mild interest, not something that I'm building worldviews around. And I think it could be useful as a jumping-off point for more rigorous study: you notice that the "The world would be better without Christianity" demographic breakdown isn't a result you would expect, so you go do a better, less biased survey to see whether it's the truth or just a result of Aella's sampling bias.

This is a weakness of any survey conducted ever. Can you put forward a mechanistic reason why that effect would be especially bad for this specific survey?

"The selection effects are much too strong" is not true of all surveys. Sure, many select for "people who answer the phone" or "people who answer the door", but either of those categories is much more similar to the average American than "people who follow Aella."

Alright, how is "people who follow Aella" going to confound the results of this specific survey?

It should be obvious. Anyone who follows Aella and is willing to complete (an average of) 100 questions is highly atypical relative to the average person. They'll generally be far more sex-focused, techy, liberal, young, urban, intelligent, white, unmarried, open to new experiences, etc. than the average person. This will have an enormous effect on survey results, especially since most of them are so focused on sex. It's like taking a survey on religious beliefs, but only including active scientologists. Sure, there is probably something to learn from the survey, but if you're trying to generalize to any larger population, the noise will far, far outweigh the signal.

Yep

This isn't true though - just the headline 'Statements most disproportionately agreed upon by women/men' results hold across the general population. Women do like written porn more than men; men do want to 'bang hot 15yos' more than women. Which makes other, less obvious results more plausible.

Not really. It seems intuitive to me (from long personal experience of perverts and deviants; I got Gladwell's mythical 10,000 hours in) that the normal stuff and the weird stuff don't correlate. The average sexually submissive freak who is into degradation still likes football more than hockey, still puts pants on one leg at a time, still prefers Coke to Pepsi. But they're much more likely to be in favor of sexually submissive degradation shit. You can't say: well, the audience prefers Coke to Pepsi, so in audiences that prefer Coke to Pepsi they also have a 49% incidence of liking getting spit on.

Exactly - just because it gets one thing right doesn't mean everything is right.

"These people don't represent the general population."

"The biggest study findings line up with what we know about the general population."

This points less towards "these people match the general population, and we can trust the findings" and more towards "these people are not so different from the general population that their most obvious results differ." There's certainly some useful data that can be extracted from these sorts of surveys, but the powerful confounding effects mean that unless you're a perfect computational machine, I think you're as likely to be misled by the data as to learn something useful.

I'm not saying all, or even most, results from this survey will generalize. But given the big results were all generalizable, it's likely that some (at least half of) smaller results will generalize too (relative to how well they would in a representative survey). And even results that don't generalize will still be interesting, because they're about a group similar to us. Yes, one should be skeptical, but "We should just ignore these surveys", based on that, is not reasonable. Also see decent-accuracy political polls with Xbox users - nonrepresentative data can be useful, although I don't think it's as useful as that paper would suggest.
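The Xbox-poll point above is usually done via poststratification: take the per-demographic-cell estimates from the skewed sample, then reweight them by the cells' known population shares instead of their shares in the sample. A minimal sketch with entirely invented numbers (the cells, rates, and shares are hypothetical, not from any real poll):

```python
# Hypothetical poststratification sketch: a sample skewed toward young men
# (like an Xbox-user poll) gives a biased naive average, but reweighting the
# per-cell estimates by census population shares corrects much of the skew.
# All numbers below are invented for illustration.

# Estimated support rate within each (sex, age) cell of the biased sample
sample_estimates = {
    ("male", "18-29"): 0.40,
    ("male", "30+"): 0.45,
    ("female", "18-29"): 0.55,
    ("female", "30+"): 0.60,
}

# Share of the raw sample in each cell (heavily skewed toward young men)
sample_shares = {
    ("male", "18-29"): 0.60,
    ("male", "30+"): 0.20,
    ("female", "18-29"): 0.15,
    ("female", "30+"): 0.05,
}

# Share of the actual population in each cell (e.g. from a census)
population_shares = {
    ("male", "18-29"): 0.10,
    ("male", "30+"): 0.40,
    ("female", "18-29"): 0.10,
    ("female", "30+"): 0.40,
}

# Naive average weights cells by how common they are in the sample;
# the poststratified average weights them by the population instead.
naive = sum(sample_estimates[c] * sample_shares[c] for c in sample_estimates)
adjusted = sum(sample_estimates[c] * population_shares[c] for c in sample_estimates)

print(f"naive estimate:  {naive:.4f}")    # 0.4425
print(f"poststratified:  {adjusted:.4f}")  # 0.5150
```

The catch, of course, is that this only corrects for the demographics you measure and reweight on; "follows Aella" is itself a selection axis that no census table lets you adjust away.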

The problem with surveys, imo, is a combination of: people only answering them as well as they themselves understand the topic (which isn't very well), people not putting much effort into answering, and people treating surveys as 'things to give vaguely correct answers to', like you would a homework assignment, rather than 'serious topics my answer matters on'. When someone answers a question like 'All pedophiles (including non-offenders) should face the death penalty', their answer isn't going to situate those ideas in any broader context; it's more a simulacrum of attitudes than a real 'belief'. Executing non-offending pedophiles, which is a private desire, would be ... very unusual in the current justice system - especially if one interprets (as many conservatives would) someone who likes loli to be a pedophile - but nobody's considering that when answering the question. And if Joe believes 'non-offending pedos should be executed', but wouldn't actually endorse 'immediately arresting and lethally injecting people who post on /r/ageplaypenpals' if it was on a ballot, is that a real belief? This doesn't prevent polls from being interesting, but it does mean they're not solid ground to stand on.

Or take the dream question - apparently women have more vivid dreams than men: 'My dreams are vivid (1.13)', 'There have been mornings where I could recall 4 or more different dreams. (1.02)'. Is this true? Is it just that women remember dreams more, but men have equally vivid dreams they forget? Maybe women talk about their dreams more in idle topic-agnostic chatter, so they end up recalling more dreams? My sense is that most 'interesting' poll questions like this have causes that aren't what the poll facially implies.

But given the big results were all generalizable, it's likely that some (at least half of) smaller results will generalize too

Agreed, but even if 75% of the smaller results generalize, there's a high enough failure rate that it's very risky to update at all based on any particular survey result.

And even results that don't generalize will still be interesting, because they're about a group similar to us.

Eh... how similar though? I agree it's interesting but I don't particularly want my intuitions informed by evidence which may be quite faulty.

Also see decent-accuracy political polls with Xbox users - nonrepresentative data can be useful, although I don't think it's as useful as that paper would suggest.

Xbox users are much closer to the typical person than Aella followers are. I agree that nonrepresentative data can be useful, but at the same time, this is a very sexuality-focused person asking her sex-focused followers about sex questions, so this seems uniquely likely to not generalize.

I think we probably agree that there is some threshold of study quality below which it's not really worth paying attention to the results at all; we just may disagree on where that threshold is and where this survey lies. My threshold for survey quality, above which I actually pay attention to what it says, is very high because I think most studies generally get things wrong. I also think this survey is quite low-quality. Based on your wordings such as:

apparently women have more vivid dreams than men

it sounds like you think this survey passes that threshold, despite that question having only a 1.13 average difference between men and women. I think it is very reasonable to simply ascribe that difference to confounders. Even something simple like a difference in average age between men and women (which seems quite plausible) could easily be enough to explain that difference on its own, and there are probably ~10 other equally likely confounders that could explain it.

There's a chance that the survey result to that question is genuine, but given all the extremely powerful confounders that could push it one way or another, I think the most prudent course of action is to simply ignore it entirely and not update at all based on it.
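The age-confound point above is easy to make concrete with a toy simulation. Everything here is invented for illustration: vividness is assumed to depend only on age (no sex effect at all), and the women in the sample are assumed to skew ten years younger than the men, yet a naive comparison still produces an apparent "sex difference":

```python
# Toy confounding simulation (all parameters invented): dream vividness
# depends only on age, but because the sampled women skew younger than the
# sampled men, a naive group comparison shows a spurious sex gap.
import random

random.seed(0)

def vividness(age):
    # Assumed model: vividness declines with age, plus noise. Sex never
    # enters this function, so any group difference is pure confounding.
    return 8.0 - 0.05 * age + random.gauss(0, 1)

# Hypothetical nonrepresentative sample: women average ~25, men ~35.
women = [vividness(random.gauss(25, 5)) for _ in range(5000)]
men = [vividness(random.gauss(35, 5)) for _ in range(5000)]

gap = sum(women) / len(women) - sum(men) / len(men)
print(f"apparent female-male vividness gap: {gap:.2f}")
# In expectation the gap is 0.05 * 10 = 0.5, despite zero true sex effect.
```

With these made-up parameters, a ten-year age skew alone manufactures a gap of roughly half a point; any survey reporting a ~1-point raw difference without controlling for composition leaves that door wide open.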

there's a high enough failure rate that it's very risky to update at all based on any particular survey result.

Oh, to clarify, I don't think it's a good idea to take any survey result as 'true, because it's in the survey'. That's a very high standard - I wouldn't even say that about a lot of large RCTs or meta-analyses in medicine (you're not uniformly sampling them, and the characteristics that make an RCT "interesting" to a random person make it more likely to be wrong somehow). (E.g. fluvoxamine, which a lot of rats made a massive deal over because of a few trials, ended up not showing benefit in later trials, I think.) And most survey-readers are much too credulous about the results, whether it's a serious poll or a fun one like Aella's. But this survey is interesting to look through to see potential associations, and then investigate them more.

it sounds like you think this survey passes that threshold

I think there's a decent (50%? idk) chance that'd generalize to the general population. Aella claims it replicates in other studies, although I didn't find any on Google Scholar quickly. My choice of the dreams + pedo examples was to argue that, even though such associations probably are present, I don't think they're that interesting.

Sure, I don't think you're taking the survey as absolute fact either. What I mean is that it's low-quality enough that (as an imperfect human) I don't consider it evidence at all. If I were a perfect Bayesian updater then I could consider all the relevant factors, weigh hypotheses, etc. and update my beliefs by 0.01% towards believing that women dream more than men, but I'm not perfect, so it's safer to just not update them at all based on such terrible evidence.
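For what it's worth, the "0.01% update" above is easy to put in numbers via Bayes' rule in odds form; the likelihood ratio here is invented purely to illustrate how little a weak piece of evidence moves a prior:

```python
# Sketch of a tiny Bayesian update (numbers invented): posterior odds =
# prior odds * likelihood ratio. A low-quality survey whose evidence has a
# likelihood ratio barely above 1 moves a 50% prior almost nowhere.

def update(prior, likelihood_ratio):
    """Bayes' rule in odds form, returning the posterior probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.50  # prior belief that women dream more vividly than men
lr = 1.01     # hypothetical: the survey is only 1% likelier under the hypothesis
posterior = update(prior, lr)
print(f"posterior: {posterior:.4f}")  # 1.01 / 2.01 = 0.5025
```

A shift from 0.5000 to 0.5025 is the formal version of "not worth the cognitive energy to track."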

But this survey is interesting to look through and see potential associations, and then investigate them more.

That's true, there's some value to it there, but again I'd be a little worried about it coloring my beliefs about things if I thought about it too much. This sort of data is very hard to find elsewhere, but can really color your day-to-day interpretations of how people act in real life. Since it's so hard to prove or disprove, those biases can just stick around for a long time if you let them.

If I were a perfect Bayesian updater then I could consider all the relevant factors, weigh hypotheses, etc. and update my beliefs by 0.01% towards believing that women dream more than men

I don't like 'Bayesian thinking' as an idea (and think thinking's Bayesianness is overstated in rationalism). It's entirely possible to see something, and then say 'huh, that could be true, and it'd be interesting if it was', and then spend time evaluating how plausible it is / looking for more evidence for it without that corresponding to a probability. You can be smart enough to consider unlikely hypotheses without it contaminating your probabilities. And this still adds up to 'the results haven't told me anything new of meaning' for me anyway.

Even if these surveys really were 'coloring your beliefs', I think the best move would be to read so many of them that you viscerally notice the contradictions and absurdity, and then stop having them color your beliefs. Otherwise, all sorts of random things people say will 'color your beliefs', even if you don't seek them out.

It's entirely possible to see something, and then say 'huh, that could be true, and it'd be interesting if it was', and then spend time evaluating how plausible it is / looking for more evidence for it without that corresponding to a probability.

Yeah, and this is pretty much what I'm referring to when I say it's not worth the time. The responses in the survey are of so little value to me, and so unlikely to be related to the truth, that I don't want to spend any cognitive energy investigating them. I'd spend more time/energy on them if I had more, the data seemed more valuable, or the conclusions lined up in interesting ways with things I already believed.

I don't like 'Bayesian thinking' as an idea (and think thinking's Bayesianness is overstated in rationalism).

Mostly agreed here - it's one of many useful cognitive tools, nothing more. I like it more as a means of informing my normal thinking process than as an actual way to think.

Even if these surveys really were 'coloring your beliefs', I think the best move would be to read so many of them that you viscerally notice the contradictions and absurdity, and then stop having them color your beliefs.

I already do, and I did scan through this latest survey, I just don't think its results should rise to the level of "this could be true and it would be interesting if it was." Investigating these hypotheses takes cognitive energy which could be spent on more worthwhile hypotheses. That's what I mean when I say we should ignore the results. Surely most of them are true but that alone doesn't give the study any value; it has to actually be insightful somehow.
