KnotGodel

1 follower   follows 6 users   joined 2022 September 27 17:57:06 UTC

User ID: 1368


  1. $300k in debt isn't that huge when the median physician pay is $350k - at 6% interest and 40% taxes it drops your take-home from $210k to $192k.

  2. The much larger cost for doctors is the opportunity cost of going to medical school and one or more residencies. The median doctor graduates from medical school at age 30 and then still has years of residency(ies) to go. Making peanuts for a decade+ after college, for the types of driven/conscientious/smart people who go to medical school, is an enormous opportunity cost, dwarfing the literal debt (e.g. a discount rate of 5% and forgone earnings of $100k per year for 10 years = $1.26m at the end).

  3. It's not really an "enormous financial risk". The 6-year graduation rate from medical school is 96%, and virtually all graduates find a residency. That is, an admission offer from a medical school is as close to a "golden ticket" as you can get in life. The only risk is whatever time you invest in getting into medical school beyond undergrad - a risk that, for most people, is 0-4 years.

  4. Finally, re people being underwater. This can happen, but it usually stems from specific decisions - i.e. spending many years trying to get into medical school, switching specialties late in the game, refusing to give up on a very selective specialty, choosing to do academic medicine in a high cost-of-living area, etc.
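The arithmetic in points 1 and 2 can be sketched directly. All figures ($300k debt, 6% interest, 40% taxes, $350k median pay, $100k/year forgone earnings, 5% discount rate, 10 years) come from the comments above; nothing else is assumed.

```python
# Point 1: debt service barely dents take-home pay.
median_pay = 350_000
take_home = median_pay * (1 - 0.40)         # 40% taxes -> ~$210k
interest = 300_000 * 0.06                   # 6% on $300k debt -> ~$18k/yr
after_debt_service = take_home - interest   # -> ~$192k

# Point 2: opportunity cost as the future value of an ordinary annuity:
# $100k/yr forgone for 10 years, compounded at a 5% discount rate.
rate, years, forgone = 0.05, 10, 100_000
fv = forgone * ((1 + rate) ** years - 1) / rate

print(round(take_home), round(after_debt_service), round(fv))
```

The future value comes out to roughly $1.26m, matching the figure in point 2 and dwarfing the $300k of literal debt.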

Is it common to have role models apart from your father, for happily married parents?

Older brothers?

More generally, I guess the pre-requisite for a male role model is that you spend a significant amount of time with men as a child. I feel like, for most boys in the US, only the men in their nuclear family qualify.

We have gotten significantly more lenient since moving off of reddit, because there is more of a worry of eroding our user base and having no replacement source

Why do you believe more moderation (relative to our reddit level) would lead to greater attrition than less moderation? It's not at all obvious to me.

You can also be on the lookout for different games to play.

Do you mean leaving the company and/or deciding to put your energy into non-work things? Or something else?

leaders don't really aggregate the knowledge of their followers.

Hmm. I'm imagining something like an explicit set of users who are gatekeepers, so if I have a 10x idea, I can just convince one person to have The Powers That Be consider it? Something along those lines?

Some which could come to mind...

I think it's important to decide whether we're judging these from the insider or the outside.

If you went to work for Apple, I feel pretty sure you'd come away thinking it is woefully incompetent. From the outside, however, it largely appears competent - not unlike the other FAANG companies, imo. Likewise, if you actually worked as a priest in the Catholic Church in Spain in the 20th century, I'd be shocked if you felt this was what "blistering, white-hot competence" looked like. From the outside, I think EA is pretty clearly amazingly competent, saving more counterfactual lives per dollar than nearly any other organization, even if you round everything hard-to-value to zero. From the inside, however, ...

Re EA being less effective. Alas, it is tedious, but I fear the only way for us to reach a common understanding is point by point, starting with

The Forum

First, re moderation policy - this is something we discuss occasionally here. Blunt people think it's crazy to mod someone just because they were blunt - it drives away such people and we lose their valuable opinions! Other people think the reverse effect is more powerful: blunt people drive away blunt-averse people and cause the loss of their valuable opinions. I'm unfamiliar with any actual evidence on the matter.

Next, spending. The comment you link to explicitly says they would not accept 2x funding, which imo puts them head and shoulders above the default of outside society (e.g. a team at a S&P 500 company, in the government, or at a typical nonprofit). I personally put a fair amount of weight on that kind of signal (I loved that Evidence Action closed down their busing program for not-enough-impact reasons). I think it's quite plausible that the forum's benefit of fostering an EA community creates new EAs and retains old ones to the extent that the value outweighs the $2m cost.

That being said, I think you are probably correct in your own comment in that thread in pointing out there is a margin-average distinction being elided, so the $2m probably really is too high.

That comment also links to a page on how they're trying to have impact. The task they rate as the most promising is running job ads on the forum. The second-most promising is helping recruiters find potential candidates. Those seem reasonably valuable to me, but, I'd still guess the EV is less than $2m.

That being said, there are some ameliorating factors:

  • The whole analysis depends on how much you think EA is money-constrained versus talent-constrained - fwiw, Scott leans more towards the latter. Indeed, this takes the cake for the biggest misconception that new-to-EA people have: that money-constraints are the primary issue.
  • Building on that, the budget appears to have absolutely ballooned with the help of FTX funding. If this is true, it's unclear what exactly the counterfactual alternative was - i.e. was this money earmarked specifically for this forum? for community outreach? idk. Certainly, SBF's donations were not entirely effectiveness-driven.

Ultimately, I'm inclined to agree that $2M is too much, but without context on how the budget was determined, I'm not sure how much of a black eye this should be for EA as a whole.

Criminal Justice Reform

When I went through Open Philanthropy's database of grants a couple years ago, I felt only about half its spending would fall under conventional Effective Altruist priorities (e.g. global health, animal welfare, X-risk). That is, I've felt for a couple years that Open Philanthropy is only about half-EA, which, to be clear, is still dramatically better than the typical non-profit, but I don't personally regard them funding a cause as equivalent to the EA community thinking the cause is effective. #NoTrueScotsman

I'm going to be honest - I do not, tonight, have the time to go through the two "alternatives" links with the thoroughness they deserve.

Points #2, #3, and the first half of #4 are reasonable for EAs and potential-EAs to know, but it's unclear that any of them constitute problems. For instance, to what extent is it bad that one billionaire moves a majority of the resources? To the extent it is bad, what realistic alternative would be better?

The second half of #4 refers to a problem that will plague any member who does hard-to-evaluate work at any large organization - i.e. nearly all white collar work, and a significant amount of government and blue collar work: namely that success depends on the perception of your work's value rather than its value, which gives you a dichotomy:

  • myopically focus on growing the pie (i.e. providing value) while studiously ignoring how the pie is distributed (i.e. whether you get grants, raises, etc)
  • learn to play the game and do work to the extent it helps you sell it

Obviously, there is a spectrum here. This is frustrating for the more scrupulous people, but beyond bad actors gaming the system, there are a number of causal reasons this dynamic persists:

  • the person most knowledgeable about your work is you - your manager (or customer) frequently knows ~10% as much as you
  • unlike in school, it is typically difficult to tell whether you are 2x slower than your coworker or if your task is 2x harder - this is made especially difficult when skillsets are diverse
  • managers don't typically like evaluating people, so they tend to avoid it by minimizing the amount of energy they put into it
  • and, yes, managers who optimize for signal rather than value tend to get promoted

The parallels for the hard-to-measure parts of EA are straightforward. This sucks, and I agree it's a "problem", but it's hard for me to imagine a clear solution. You seem to think it would be better if

  1. powerful EAs spent more time responding to comments on EA forum
  2. more grassroots-esque grants were given like Scott Alexander's

I intuitively agree with #2, but #1 seems really unclear to me. Commenters are nearly always less well-informed than the decision-makers, so it's unclear to me that this is actually a good use of the decision-makers' time. Maybe they could hire PR people to do this for them? Is that a good use of EA money? idk - maybe. But I suspect this would make you more upset rather than less.

blistering, white-hot competence

Can you give an example of any multi-billion dollar movement or organization that displays "blistering, white-hot competence"? If not, maybe your standard is unreasonable?

To sometimes take their funding, but to do your own thing and preserve your ability to comfortably leave

This seems blatantly anti-social and immoral.

Ultimately, this critique seems to fundamentally be an attempt to take someone whose genuine values match EA-the-philosophy and warn them that EA-the-movement differs, which is all well and good. However, it might be stronger, if you

  1. provided concrete evidence that interventions are less effective than claimed
  2. offered concrete alternatives to this target audience.

Wait, how could I have conveyed that idea in a way with less antagonism than I did? [Edit: I didn't want to assume he believed this (which is putting words in his mouth and against the rules) - so the only option if I wanted to engage was to ask for clarification]

I don't respect sore losers

Would you consider the Right good losers?

Depends on your definition of “caring”

¯\_(ツ)_/¯

As an example, I have a very specific explanation of how my caring has changed. You decided to simply assert that this change doesn’t count as “not caring” to you.

I could practice some “Outside View” and wonder whether you might be right - but then I remember that the Outside View presupposes the other person is actually adding valuable information and not just trying to “win” points at my expense

Past me: I got downvotes; what is wrong with my comment?

Present me: I got downvoted; what does that imply about the community doing the voting?

In your second link, I was responding to someone who was misinterpreting my point, and putting words in my mouth, which I seem to recall is itself against the rules. But whatever, I've reported similarly antagonistic comments with no mod action.

I maintain that this community is mostly rationalized as a place to "debate" and find which ideas survive, while actually fulfilling the members' need to vibe/whine - i.e. reinforce that they are smart and everyone else is either an idiot or evil. Anyone who hinders this process of self-validation gets downvoted and/or negatively commented on.

For instance, you want to make a completely unsubstantiated partisan quip? 42 upvotes. You respond with actual statistical evidence? 1 upvote. Makes it pretty clear where the priorities of this community are.

That is to say, after many years, I've finally let go of caring about what strangers on an internet forum think about me. In the famous words of Rick Sanchez, "Your boos mean nothing, I’ve seen what makes you cheer."

Wrong.

I’m saying OP’s method of evaluating complaints is shit.

Let me make this very concrete for you

  1. Everyone complains about things holding them back that aren't their fault.
  2. It is common practice for this to be a social endeavor, and for people to avoid voicing disagreement, because that is considered anti-social: people playing the game Poor Old Me generally don't want to play the game everyone here is addicted to, Debate Me.
  3. If I complain I'm not X because I'm an A, and you reply that people who are not-A are also not-X, you haven't actually provided any evidence that the causal claim I was making is false.
  4. Even the most successful people can correctly point to things that held them back.

In other words, if we apply the standard of discourse used by the OP, we can validly whine about anyone's whining. That standard of discourse is, in a word, shit. It only appeals to people who have been mind-killed.

The specifics about Trump absolutely don't matter. I could point to any person or demographic, and there would be things they whine about holding them back. I could make a post exactly like the OP's regardless of whether those factors had any basis in reality.

I realize this forum is mostly a place to vibe/whine.

Sorry for killing the mood /s

My point is that this method of reasoning is garbage that only seems useful when you are mind killed.

The specifics hardly matter.

Allow me to paraphrase your complaints from the other side of the aisle:

Trump will tell his supporters that, of course he lost in 2020 - The Establishment is manipulating things behind the scenes - everyone knows that. But Trump literally won in 2016! The media makes much ado about Biden's "dementia"! What idiots those Republicans all are! Isn't it shocking that everyone confirms/affirms this explanation!?

And what about the White Kid Who Was Rejected By Harvard because of affirmative action? He literally got into U Chicago! What about all the black kids Harvard rejected!? It truly boggles the mind.

If even Trump explains the world to himself this way, what is a normal Republican supposed to think? A poor white trash family in a trailer park? How can self-exculpatory models of the world be eradicated in people with somewhat credible claims to oppression when they are so popular even among the most privileged members of society?

Please proactively provide evidence in proportion to how partisan and inflammatory your claim might be

Virtually no one here takes that rule seriously.

Empirical Verification of Sailer's Law - there is some truth to this "law" (the odds ratio is about 2.4:1), but thinking it is anywhere near conclusive evidence is crazy.

My response was an attempt to give OP exactly what they asked for (steelman arguments for veganism). The "low effort" rule is intended to exclude "three word shit-posts", which mine is definitely not.

Their responses were literally just boo-outgroup and therefore almost the epitome of the "three word shit-post". They consisted entirely of rule-breaking content.

Under your odd interpretation, which relies only on length, any comment consisting of a simple clarifying question could legitimately be answered with shit-posts.

Edit: when some members of this forum are EAs/rationalists/veterans - that distinction is pretty narrow.

Gotcha - on first reading, I misinterpreted it as

To treat liberalism as an inevitable endpoint, or a universal truth, or some manifestation of the underlying laws of the universe; it undermines [the principles and values that] made liberalism triumphant and successful.

which triggered my confusion. Based on what you said, the intention is more along the lines of

To treat liberalism as an inevitable endpoint, or a universal truth, or some manifestation of the underlying laws of the universe; it undermines [the courage, actions, and habits that] made liberalism triumphant and successful.

You should've just said something like

I accept that as weak evidence against a Leviathan-shaped hole existing


Seeing as I find veterans insufferably obnoxious this isn't a surprise to me.

Those kinds of quips don't result in any Mod response and can net you quite a few upvotes! :D

@Amadan

To treat liberalism as an inevitable endpoint, or a universal truth, or some manifestation of the underlying laws of the universe; it undermines what made liberalism triumphant and successful.

What does this mean?

I think there are two important lenses here.

Via the probability-theory lens, we must distinguish between

  • the propensity for the coin to land on heads - unknown
  • the subjective (in the Bayesian sense) probability Yudkowsky assigns to the coin landing on heads on the next flip

Under a Bayesian epistemology, the former is reasoned about using a probability density function (PDF), which assigns a subjective density to (approximately) every number between 0 and 1. Then, when we observe a flip, we update using the likelihood function (either x for heads or 1-x for tails). What you're talking about is essentially how spread out Yudkowsky's current PDF is.
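A minimal sketch of the update just described, using a Beta prior (which is conjugate to the coin-flip likelihood, so the posterior stays Beta). The starting Beta(1, 1) (uniform) prior and the particular flip sequence are arbitrary illustrative choices, not anything from the discussion.

```python
# Beta(a, b) prior over the coin's unknown heads-propensity x.
# Observing heads multiplies the density by x (i.e. a += 1);
# observing tails multiplies it by 1 - x (i.e. b += 1).

def update(a, b, flip):
    """Return the posterior (a, b) after observing one flip."""
    return (a + 1, b) if flip == "heads" else (a, b + 1)

def mean(a, b):
    """Posterior mean of the heads-propensity."""
    return a / (a + b)

a, b = 1, 1  # uniform prior over [0, 1]
for flip in ["heads", "heads", "tails"]:
    a, b = update(a, b, flip)

print(mean(a, b))  # 3/5 after observing 2 heads and 1 tail
```

How spread out the PDF is corresponds to the Beta's variance, which shrinks as more flips are observed.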

The other lens is markets-based, which I've touched on before. Briefly, for reasons that are obvious to anyone in finance, there is a world of difference between

  • believing a stock is worth X
  • offering to buy the stock for X+0.01 from anyone and sell it for X-0.01 to anyone

In real life, the bid-ask spread that market makers offer depends on a great number of factors, including how informed everyone else in the market is relative to them. Under this lens, credible intervals (or whatever phrase you want to use) are not things individuals have in isolation; they are things individuals have within a social space: if you're with a bunch of first-graders, you might quote a very tight bid-ask spread when betting on whether a room-temperature superconductor was just discovered; if you're with a bunch of chemistry PhDs, you're going to adopt an extremely wide spread (e.g. "somewhere between 5% and 95%").
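The first-graders-versus-PhDs point can be sketched as follows. The widening rule here is made up purely for illustration; nothing in the comment specifies a formula - the only claim being modeled is that the spread around the same belief widens as counterparties get better informed.

```python
def quote(belief, counterparty_edge, base_half_spread=0.01):
    """Return a (bid, ask) pair around a probability-like belief.

    counterparty_edge: 0.0 for uninformed counterparties (first-graders),
    up to ~1.0 for much-better-informed ones (chemistry PhDs).
    The half-spread grows with edge, scaled by belief uncertainty x(1-x).
    """
    half = base_half_spread + counterparty_edge * belief * (1 - belief)
    return (belief - half, belief + half)

tight = quote(0.5, 0.0)  # trading against first-graders
wide = quote(0.5, 1.0)   # trading against chemistry PhDs
print([round(v, 2) for v in tight])  # [0.49, 0.51]
print([round(v, 2) for v in wide])   # [0.24, 0.76]
```

Same belief (50%), very different willingness to trade around it - which is the distinction between "believing a stock is worth X" and quoting X±0.01 to all comers.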

Ok, but, again - all the court case proves is that one company wanted an IQ test and at least one organization was willing to sue them for it?

For all the evidence that has been presented in this thread (i.e. none), media backlash was sufficient to prevent literally every other company in the US from using IQ tests. I'm not saying this is actually true - just that the existence of a court case is not, in fact, good evidence that "media backlashes would not have been enough."

I would consider the court case itself as evidence that media backlashes would not have been enough

not have been enough to what?

Doesn't the court case merely prove at least one company wanted an IQ test and at least one organization was willing to sue them for it?

Specifically, GSS data showed that 63% of young men reported themselves as single while only 34% of young women did. This was of course immediately seized upon as proof that a huge proportion of girls are in “chad harems.” Since nobody bothers to read beyond a sensationalist headline, not many dug deep enough to discover that this proportion has been roughly the same for over thirty years, so if the chadopoly is real, it’s been going on for a long time

When I looked into this, I found that, across all age groups, the implied number of non-single people was roughly equal across the sexes. This strongly suggests that the factor driving the gap is a large number of younger-woman-older-man pairs.
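A back-of-envelope sketch of that reasoning. The 63%/34% single rates come from the quoted comment; the cohort size is assumed equal across the sexes and is purely illustrative.

```python
# If partnered counts must balance overall, but young women report being
# partnered far more often than young men, the surplus partnered young
# women are plausibly paired with older men.

young_men = young_women = 1_000_000  # assumed equal cohorts (illustrative)

partnered_young_men = young_men * (1 - 0.63)      # 37% of young men
partnered_young_women = young_women * (1 - 0.34)  # 66% of young women

# Partnered young women not accounted for by partnered young men:
surplus = partnered_young_women - partnered_young_men
print(round(surplus))  # implied younger-woman-older-man pairs
```

With these illustrative cohorts, roughly 290k of every million young women would have to be partnered with men outside the young-men cohort.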

You know - I've done some searching (Google, Google Scholar) and llm-ing (GPT, Claude, Bard), and I can't really find any evidence I would consider strong

  1. for or against significant media backlash against IQ-testing potential hires
  2. for or against the claim that IQ tests were common before Griggs
  3. for or against the claim that IQ-testing of potential hires significantly decreased post-Griggs

So, I guess I'm gonna revert to agnosticism.