MollieTheMare · joined 2022 September 06 17:56:29 UTC · User ID: 875

I wasn't planning on publishing the source, since my code is a bit idiosyncratic, but there seems to be enough interest.

A pastebin with the code. Uhh, I guess I didn't put a license statement. Let's say BSD Zero Clause License: do what you want, but don't blame me if it ruins your love life.

Is there a way to publish a pseudonymous/anonymous gist on GitHub?

It's interesting that what you suggested is almost the opposite of the scenario @Felagund suggested. I suppose a hopeless romantic would not want to risk the potentially corrosive effect of having knowingly settled. I assume that in practice you would combine some knowledge of the current rate, the steepness of the expected falloff, some pure romantic inclination, and some fear of missing out into a heuristic.

The scenario where we keep n from above but keep going if we still haven't found the one is, I think, interesting. If we set our benchmark at r = sqrt(n), 83% of the time you find your partner before n/e. Assuming (offset) exponentially distributed utility, the expected utility in this case is about the same as in the case where we assumed halting. I guess this is like the plethora of people who marry someone they meet in college? In about 10% of cases you manage to find a partner before the expected window closes, and patience is rewarded with about 50% more utility (4.5 vs 3).
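
If you want to poke at this no-halting variant yourself, here is a minimal sketch, assuming offset exponential utility (mean 0, sd 1) and a benchmark set by the first sqrt(n) suitors. Names here are illustrative, not taken from the pastebin:

```python
import numpy as np

rng = np.random.default_rng(0)

def no_halt_trial(n=256, rng=rng):
    """One run: set the benchmark from the first sqrt(n) suitors, then
    keep going (past n if necessary) until someone beats it."""
    r = int(np.sqrt(n))
    u = rng.exponential(size=n) - 1.0   # offset exponential: mean 0, sd 1
    benchmark = u[:r].max()
    for i in range(r, n):
        if u[i] > benchmark:
            return i, u[i]              # (index stopped at, utility received)
    i = n
    while True:                         # no halting: keep drawing new suitors
        v = rng.exponential() - 1.0
        if v > benchmark:
            return i, v
        i += 1

stops, utils = zip(*(no_halt_trial() for _ in range(20_000)))
print("P(partner before n/e):", np.mean(np.array(stops) < 256 / np.e))
print("mean utility:", np.mean(utils))
```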

I then assumed some very questionable things to set the next boundaries. First, that we can transpose to time as above. Second, that we care about marriage with respect to producing children. Putting geriatric maternal age at 35-40, and assuming you would just offset paternal age so we don't have to deal with an extra set of scenarios, I find a new cutoff of 320/256. I think this sort of accommodates @jeroboam's point. In that case, not stopping but being willing to continue into the danger zone, 1.3% of the runs find the one by "40." Of course expected utility is higher at 5.2, but being willing to push age while unwilling to settle picks up only a small number of additional "successes."

In the remaining 5% of cases you eventually find your soulmate, with an expected utility of 6.4. You do have to wait exponentially long, though, with a median age equivalent of 67 and a mean of 343!

Setting the high-water mark at n/e, but being unwilling to stop, is similar in utility. Now you've eliminated the 3-unit expected-utility bucket, and the 4.5-unit bucket has 63% weight. Your willingness to go into the (questionably) age-equivalent 35-40 bucket also preserves 7% of the trials. By setting your benchmark so late, though, 30% of the time you miss the critical window. The higher expected utility, I guess, represents it being totally worth it to find your soulmate, assuming there is no penalty for waiting past geriatric pregnancy age.


@self_made_human, don't worry, I know these simulations are entirely irrelevant to us denizens of themotte; hence the fun thread, and why I included the note on n <= 7. ಥ_ಥ

> first grade teacher

Based only on this, isn't the average elementary education major's IQ 108, going by the old SAT data? The gap between 108 and 120 is still pretty healthy.

> 120 (80thp)

Am I messing up the IQ quantile conversion, or was there an error up-thread? Using a normal with mean 100 and SD 15, I get:

|  IQ   | p     |
|-------|-------|
| 140   | 0.996 |
| 135   | 0.990 |
| 120   | 0.909 |
| 112.6 | 0.800 |
| 110   | 0.748 |
| 108   | 0.703 |

So a relaxation to a requirement of 120 would only be a 10x wider filter rather than 20x.
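
A quick sketch for checking or extending the table, assuming scipy is available (any stats library's normal CDF will do):

```python
from scipy.stats import norm

iq = norm(loc=100, scale=15)  # standard IQ scaling

for x in (140, 135, 120, 112.6, 110, 108):
    print(f"{x}: {iq.cdf(x):.3f}")  # quantile of each IQ score

# Filter width: 1 in how many people pass the cutoff.
print(1 / iq.sf(135))  # ~1 in 100
print(1 / iq.sf(120))  # ~1 in 11, so the 120 filter is ~10x wider
```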

I kind of assumed, based on a vague recollection of OP's claimed achievements, username (as an implied major), and desire for a 135+ IQ partner, that their (maybe self-assessed?) IQ was at least 140, in that if it were "only" 135, it would be unreasonable to set the lower bound at 135.

At 140, the gap to 110 is 30 points, which is the same as the gap from 100 to 70, or average to borderline intellectually disabled. I do think it's possible for 140 paired with 110 to work, which is why I put it as conditional on the relationship you expect with your children; there is a whole set of life experiences you likely will never be able to share with them. That's based on a crude model of averaging the parents' IQs and assessing a 10-point regression to the mean: (140 + 110) / 2 - 10 = 115. I'd be pretty interested if someone has a less ad hoc way of calculating this.
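
The least ad hoc version I know of is a breeder's-equation style estimate: shrink the mid-parent deviation from the population mean by the narrow-sense heritability. The h2 = 0.6 below is my assumption, not a measured value:

```python
def expected_child_iq(p1, p2, mean=100.0, h2=0.6):
    """Breeder's-equation style estimate: the mid-parent deviation from
    the population mean regresses toward the mean by a factor h2."""
    midparent = (p1 + p2) / 2
    return mean + h2 * (midparent - mean)

print(expected_child_iq(140, 110))  # 115.0 at h2 = 0.6
```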

It can at least work in fiction, though: season 6, episode 9 of House, "Ignorance Is Bliss," has an IQ-178 character married to an IQ-87 spouse.

> but not solve as many problems

This is a common issue: solving problems tends to be effortful, and people tend to avoid it when they don't "have to" do it for homework.

If you can't solve problems, you have a child's understanding. Children understand that things fall down under gravity; that doesn't make them experts on general relativity.

I don't think I have ever met someone in a technical field who is both good and hasn't taken the time to do a bunch of calculation, even bona fide geniuses (Putnam Fellow level).

From Grant Sanderson on self teaching:

> I think where a lot of self learners shoot themselves in the foot is by skipping calculations by thinking that that's incidental to the core understanding. But actually, I do think you build a lot of intuition just by putting in the reps of certain calculations. Some of them maybe turn out not to be all that important and in that case, so be it, but sometimes that's what maybe shapes your sense of where the substance of a result really came from.

If you want to feel smart join Mensa. If you want to get something out of being smart, you have to put in the work.

It's the "official" builtin board style forum of (like the 3rd cousin?) of themotte. I think the relationship is roughly like:

```
lesswrong
   └─> slatestarcodex ──> astralcodexten <─> datasecretslox
              └──> r/slatestarcodex ──> r/themotte ──> themotte
```

Obviously the full history is a bit more complicated and there is a bunch of cross mixing between the branches.

The Fussy Suitor Problem: A Deeper Lesson on Finding Love

Inspired by the Wellness Wednesday post by @lagrangian, but mostly for Friday Fun: the fussy suitor problem (aka the secretary problem) has more to teach us about love than I initially realized.

The most common formulation of the problem deals with the rank of potential suitors. After rejecting r suitors, you select the first suitor after r who is the highest-ranking so far. Success is defined as choosing the suitor who would have been the highest-ranking in the entire pool of suitors (size n). Most analyses focus on the probability of achieving this definition of success, denoted P(r), which is straightforward to calculate. The "optimal" strategy converges on setting r = n/e (approximately 37% of n), resulting in a success rate of about 37%.

However, I always found this counterintuitive. Even with optimal play, you end up failing more than half the time.
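
The classic result is easy to check by simulation; a minimal sketch (n = 256 to match the runs below):

```python
import numpy as np

rng = np.random.default_rng(1)

def classic_trial(n, r, rng):
    """True if the stop-after-r rule picks the single best suitor."""
    ranks = rng.permutation(n)        # ranks[i]: quality of suitor i, best = n - 1
    benchmark = ranks[:r].max()
    for i in range(r, n):
        if ranks[i] > benchmark:
            return ranks[i] == n - 1  # stopped here; was it the best?
    return False                      # best was in the first r: forced to settle

n, trials = 256, 20_000
r = round(n / np.e)                   # ~94, i.e. ~37% of n
print(sum(classic_trial(n, r, rng) for _ in range(trials)) / trials)  # ~0.37
```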

In her book The Mathematics of Love, Hannah Fry suggests, but does not demonstrate, that we can convert n to time, t. She also presents simulations where success is measured by quantile rather than absolute rank. For instance, if you end up with someone in the 95th percentile of compatibility, that might be considered a success. This shifts the optimal point to around 22% of t, with a success rate of 57%.

Still, I found this answer somewhat unsatisfying. It remains unclear how much you actually give up by settling for the 95th percentile of compatibility. Additionally, I wondered whether the calculation depends on the courtship process following a uniform geometric progression in time, although this assumption is common.

@lagrangian pointed out to me that the problem has a maximum expected payoff at r = sqrt(n), assuming uniform utility. While a more mathematically rigorous analysis exists, I decided to start by building some intuition through simulation.
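
As a first sanity check, you can sweep r and watch the mean payoff peak near sqrt(n). A sketch under the same conventions I use below (uniform utility scaled to mean 0, sd 1; names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 256, 4_000

def payoff(u, r):
    """Utility received by the stop-after-r rule on one suitor sequence u."""
    benchmark = u[:r].max()
    later = np.nonzero(u[r:] > benchmark)[0]
    return u[r + later[0]] if later.size else u[-1]  # else: settle for the last

for r in (4, 8, 16, 32, 64, 94):  # sqrt(256) = 16, 256/e ~ 94
    mean_u = np.mean([payoff(rng.uniform(-np.sqrt(3), np.sqrt(3), n), r)
                      for _ in range(trials)])
    print(r, round(mean_u, 3))
```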

In this variant, we consider payoff in utilitons (u) rather than just quantile or rank information. For convenience, I assume there are 256 suitors.

The stopping point based on sqrt(n) grows much more slowly than in the n/e case, so I don't believe this choice significantly alters any qualitative conclusions. I'm pretty sure using the time domain here depends on the process and rate, though.

I define P(miss) as the probability of missing out: accidentally exhausting the suitors and ultimately "settling" for the 256th. In that case you met the one, but passed them up and settled for the last possible person. Loss is defined as the difference between the utility of the suitor selected by the stopping rule and the utility that would have been gained by selecting the actual best suitor. Expected shortfall (ES) is calculated at the 5th percentile.

I generate suitors from three underlying utility distributions:

  • Exponential: Represents scenarios where there are pairings that could significantly improve your life, but most people are unsuitable.
  • Normal: Assumes the suitor’s mutual utility is an average of reasonably well-behaved (mathematically) traits.
  • Uniform: Chosen because we know the optimal point.

For convenience, I’ve set the means to 0 and the standard deviation to 1. If you believe I should have set the medians of the distributions to 0, subtract log(2) utilitons from the mean(u) exponential result.

Running simulations until convergence with the expected P(r), we obtain the following results:


| gen_dist |    r    | P(r) | P(miss) | <u> | <loss> | sd_loss | ES_5 | max_loss |
|----------|---------|------|---------|-----|--------|---------|------|----------|
|   exp    |   n/e   | 37%  |   19%   | 2.9 |  2.2   |   2.5   | 7.8  |   14.1   |
|   exp    | sqrt(n) | 17%  |   3%    | 3.0 |  2.1   |   1.8   | 6.6  |   14.8   |
|----------|---------|------|---------|-----|--------|---------|------|----------|
|   norm   |   n/e   | 37%  |   19%   | 1.7 |  1.2   |   1.5   | 4.6  |   7.0    |
|   norm   | sqrt(n) | 18%  |   3%    | 2.0 |  0.8   |   0.8   | 3.3  |   6.3    |
|----------|---------|------|---------|-----|--------|---------|------|----------|
|   unif   |   n/e   | 37%  |   19%   | 1.1 |  0.6   |   1.0   | 3.2  |   3.5    |
|   unif   | sqrt(n) | 17%  |   3%    | 1.5 |  0.2   |   0.5   | 2.1  |   3.5    |

What was most surprising to me is that early stopping (r = sqrt(n)) yields better results for both expected utility and downside risk. Previously, I would have assumed that since the later stopping criterion (r = n/e) is more than twice as likely to select the best suitor, its expected shortfall would be lower. The opposite holds true. With the later benchmark you are more than 6 times as likely to have to settle, so even when suitability is highly skewed, as in the exponential case, expected value still favors r = sqrt(n)! This is a completely different result from the r = n/e I had long accepted as optimal, and the effect is even more extreme than the quantile-time based result.

All cases yield a positive expectation value. Since we set the mean of the generating distributions to 0, this implies that on average having some dating experience before deciding is beneficial. Don’t expect your first millihookup to turn into a marriage, but also don’t wait forever.

I should probably note that for low but plausible n <= 7, sqrt(n) is larger than n/e (e.g. sqrt(7) ≈ 2.65 vs 7/e ≈ 2.58), but with whole numbers of suitors the optimal r (±1) is still the one given in the standard tables.

One curious factoid is that actuaries are an appreciable outlier in having the lowest likelihood of divorce. Do they possess insights about modeling love that the rest of us don't? What do they know that the rest of the life, physical, and social sciences don't? Or are they just more disposed to finding a suitable "good" partner than to holding out for the one? I'd be very interested if anyone has other probabilistic models of relationship success.