
Friday Fun Thread for December 16, 2022

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


If you're actually willing to bet a substantial amount at 99:1 I'll happily take the flip side of that bet, conditional on us being able to work out an experimental procedure we both agree on (but I'd expect that we could in fact come up with such a procedure).

I probably wouldn't take you up on that at 4:1 though. 99:1 is just a really extreme odds ratio.
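For concreteness (illustrative arithmetic only; the function name is made up): offering 99:1 only breaks even if you genuinely put the chance of replication at 1% or below.

```python
# What offering 99:1 commits you to: you stake 99 units against 1,
# and you only win if the effect fails to replicate.
def ev_offering_99_to_1(p_replicates, stake=1.0):
    """Expected value of the 99:1 offer, given your own probability
    that the doorway effect replicates."""
    return (1 - p_replicates) * stake - p_replicates * 99 * stake

print(ev_offering_99_to_1(0.01))  # break-even point, ~0
print(ev_offering_99_to_1(0.04))  # even a 4% replication chance is costly
```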

"In the real world, with significant effect sizes" were important qualifiers there - so if it replicates but doorways make you 3% more likely to forget (instead of like 30% as in the study) wouldn't count, and I'm not even sure what to think about video game doorways having similar effects to real world doorways. I wouldn't offer 99% on video game doorways, or it at any effect size. And a real world experiment would need blinded randomization of some sort. I mean, I guess you could randomly generate some colored shapes on your computer, tab to notepad, walk around a room / through a doorway and back, and come back and write the guesses into notepad. But you'd need a lot of trials, and that sounds annoying. I'd consider the bet IRL, but anonymity concerns mean I probably won't online, I wouldn't even accept free money on this name,

A quick look at Google Scholar - most of the work on this topic is by Gabriel Radvansky and co, who've published a lot on it in all sorts of subgroups and permutations. Aside from him, three papers from different groups: Lawrence finds it works if you imagine the doorways too (totally), while McFadyen et al. find "Across this series of experiments ... we observed no significant effect of doorways on forgetting". Also, from that paper: "To follow up the null effect of shift, we computed a set of Bayesian paired t-tests (using JASP v0.9.2.0, default Cauchy prior width = 0.707) and found that there was sufficient evidence for the null hypothesis for there being no effect of shift on associated (BF01 = 3.870) or dissociated (BF01 = 3.590) probes." And: "Follow-up Bayesian paired t-tests supported the null hypothesis for there being no effect (as opposed to an underpowered effect) of shift on the bias parameter for associated (BF01 = 4.693) or dissociated (BF01 = 4.710) probes." Their experiment 2 appeared to find a doorway effect, but they dissect the data and say it's for another reason - doorways increased the 'false alarm rate to negative probes'. So idk. It's funny how Radvansky and friends have so many positive results, but as soon as you narrow down to other groups you quickly run into negative results.
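As a rough sketch of what those BF01 numbers mean (assuming a 50/50 prior on the null, which the paper doesn't specify): a BF01 around 4 moves you to roughly 80% on the null.

```python
def posterior_null(bf01, prior_null=0.5):
    """Posterior probability of the null, given BF01 (evidence for the
    null over the alternative) and a prior probability on the null."""
    prior_odds = prior_null / (1 - prior_null)
    post_odds = bf01 * prior_odds
    return post_odds / (1 + post_odds)

# The four BF01 values quoted from McFadyen et al.:
for bf in (3.59, 3.87, 4.693, 4.71):
    print(f"BF01={bf}: P(null) = {posterior_null(bf):.2f}")
```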

99% is an extreme odds ratio, but there are a lot of cute little psychological effects like this (being in a red room? it makes you ANGRY!). And, I don't really like Bayesian epistemology, but even if you have only a 10% (still quite low) prior on these cute effects being true, over the thousands of effects some psych prof is hallucinating, that's still a lot that end up true. Not that experimental psychology is all bunk - stuff like coloring the word 'red' green and it taking longer to say (the Stroop effect) is much more plausible.

"In the real world, with significant effect sizes" were important qualifiers there - so if it replicates but doorways make you 3% more likely to forget (instead of like 30% as in the study) wouldn't count, and I'm not even sure what to think about video game doorways having similar effects to real world doorways

How would you feel about the following test?

  1. Have some variety of demanding, finicky task you want done on mturk. Probably something like "read a dull passage of marketing copy that is long enough to require scrolling, answer a couple questions about it".

  2. Set the task up such that it is structured in batches of 10.

  3. During the 8th task of 10, once they've scrolled a bit down, have a janky popup that blocks the screen and says "Feeling tired? {Grabbing a drink of water|Taking a moment to stretch} can help you retain focus." with a 60 second countdown.

  4. When they get back, have a "bug" where they can't scroll back up.

  5. Measure performance on each task in the batch.

A positive result would look like "performance on task 8 was better in the stretch group than the water group, p < 0.02", combined with not finding performance differences on tasks 1 to 7 at p < 0.02 using the same methodology, and would only count as positive if the effect size were substantial (say, 1.2x as many mistakes in the water group as in the stretching group).
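For a sense of how much work that takes (the 12% vs 10% mistake rates below are assumptions for illustration - nothing from the study, just something that matches the 1.2x ratio): a crude Monte Carlo power check for the p < 0.02 criterion.

```python
import math
import random

def sim_count(rng, n, p):
    """Approximate Binomial(n, p) draw via the normal approximation
    (fine for the large n needed here)."""
    k = round(rng.gauss(n * p, math.sqrt(n * p * (1 - p))))
    return min(max(k, 0), n)

def p_one_sided(k1, n1, k2, n2):
    """One-sided two-proportion z-test p-value (H1: rate1 > rate2)."""
    p_pool = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (k1 / n1 - k2 / n2) / se
    return 0.5 * math.erfc(z / math.sqrt(2))

def power(n_per_arm, rate_water=0.12, rate_stretch=0.10,
          alpha=0.02, trials=2000, seed=0):
    """Fraction of simulated experiments reaching p < alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k_w = sim_count(rng, n_per_arm, rate_water)
        k_s = sim_count(rng, n_per_arm, rate_stretch)
        if p_one_sided(k_w, n_per_arm, k_s, n_per_arm) < alpha:
            hits += 1
    return hits / trials

for n in (500, 2000, 5000):
    print(n, power(n))
```

At these assumed rates you'd need thousands of workers per arm before the test reliably clears p < 0.02, which is part of why the whole thing sounds expensive to actually run.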

BTW, for your reference I'd estimate probably 2-5% that the above would get a positive result. The doorway research totally smells like the type of research that'll fail to replicate. It just doesn't smell like the type of thing where, if I said "that smells like it won't replicate" 100 times, I would only expect to be wrong one of those times.

(also yeah, after writing that all out it sounds like a lot of work to test, so unless we're talking about a quite large 99:1 bet I actually don't have the attention span for it).

I don't think a significant fraction of the mturkers would actually get up and get a glass of water, at a guess. You're sitting down at your desk to do a bunch of mturking - maybe you'll tab out to reddit for 60 seconds, but get up for water because a popup told you to?

If the 'door effect' were real, and the stretch and water suggestions worked every time, why wouldn't stretching have a similar effect? It's a visceral distraction too. lol

and even without said problems, it still seems very annoying to run