KnotGodel

1 follower · follows 6 users · joined 2022 September 27 17:57:06 UTC

User ID: 1368


  1. SBF is not the same as either of those. ETA: not to mention the incredible irony, since this website stems from literally the same lineage
  2. Why does it matter who says the argument?
  3. Given your bad luck so far, I don’t know why you’re being so picky about where the argument comes from

If they believed their altruism was ineffective they wouldn’t do it

Everyone thinks their own altruism is effective. EAs deliberately try to choose the most effective, which the vast majority of people don't consider attempting and act confused if you suggest it.

A central question is who counts as an expert and in what context.

To most people, I think the answer is implicitly something like "any professor, reporter, or politician on twitter".

My lived definition is closer to "meta-analyses, literature reviews, textbooks, and institutionally backed datasets".

If your lived experience is mostly on buzz-based social media, I can see why you'd distrust experts. If it's based on peer-reviewed literature reviews, not so much.

For example, re HBD, surveys show that intelligence researchers largely agree genes are a likely cause of intelligence differences. Sure, "twitter experts" might express extreme confidence that it is not and that you are racist if you think so, but... who counts as an expert to you? Why do you define expert in that way?

I think many people on this forum weigh expertness by power - i.e. Fauci is The Expert on Covid. I don't. Neither perspective is wrong. The people weighing by power care about political consequences. I care about understanding the truth of matters. Different goals/values.

I think there are two levels to an answer here. The first is to take your framing and dive into the difference between intelligence and perceived intelligence. I think there are two important things here: motivation and legibility.

Plenty of genius-level people just don't spend their time in smart-seeming disciplines. One of the smartest people I knew graduated from college at the age of 18, was an international chess master, and went to work at a FAANG. Then he left to become a baker. This outlier aside, I suspect that when a man realizes he's good at math/writing/etc, he is disproportionately likely to reorient his life to spend oodles of time on the subject. This brings me to...

Legibility. The guy in class answering all the questions may or may not be the smartest - but he definitely appears the smartest. The guy who aces the test may or may not be the best in industry or in research, but a test score is much more legible than the latter two. Intelligence related to emotions, socialization, and even words is much less legible than intelligence related to math and coding and engineering. But it is crucial (in life, really) to avoid conflating "harder to measure" with "less important". Also, having the confidence/narcissism to state and defend your beliefs is probably only loosely related to intelligence, but probably strongly related to perceived intelligence.

As an aside, intelligence is academically defined as the principal component of academic test scores. Do you know what one of the strongest predictors is? Vocabulary size (r=0.83), followed by similarities (r=0.80). Both are much stronger predictors than arithmetic (r=0.68). Yet, a math-smart person is typically considered by people to be obviously smart, while intelligence in other disciplines is less obvious.

The second level is psychological. Why do you care about your partner's intelligence?

Examples:

  • You worry about her ability to make money.
  • You worry about other people's opinions of your intelligence.
  • You worry that your difficulty seeing her as smart indicates a character flaw on your part.

If this were me thinking through this about myself, I would also ask myself why, on some level, I want to believe she is less intelligent. On an internet forum, such a presumptuous question is probably out of place. I will note, though, that as someone widely considered a "math wiz" growing up:

  1. It is psychologically comforting to believe that being math-smart is super special awesome
  2. If you spend thousands of hours doing math... you're going to end up believing that math is Important. The same is true of anything you spend time doing.

Inflation helps debtors and hurts savers

Unexpected inflation helps debtors and hurts savers. That's a pretty big difference, and I see no reason to expect the market to be biased in its inflation forecasting with either a gold standard or fiat currency.

you’d kind of suspect a country that money isn’t sound to become less interested in investing long term

Sure, price/inflation volatility is bad. The obvious next question is whether a gold standard makes inflation volatility smaller or larger. My money is on "larger" for the US, but I admit it's a hard question to answer.

perhaps more likely to invest in more volatile assets

I don't think so?

Per the Markowitz Model, if you are allocating your money between a risk-free asset that returns r and a risky asset whose return is distributed Norm(µ, V), you should put this proportion in the risky asset to maximize expected utility:

(µ - r) / (2 * e * (1 - t) * V)

where e is a risk-aversion parameter and t is the capital gains tax rate. Inflation should push both µ and r up by the same amount, so it shouldn't affect how much is invested in risky assets. If you add extra variance, U, to both the risky and riskless investments, the formula doesn't change either, so I'm skeptical price volatility changes asset allocation.
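As a quick sketch (the inputs below are illustrative assumptions, not market estimates), you can check directly that shifting both µ and r up by the same inflation premium leaves the allocation untouched:

```python
def risky_fraction(mu, r, e, t, V):
    """Optimal fraction of wealth in the risky asset under the
    mean-variance rule discussed above.

    mu: expected return of the risky asset
    r:  risk-free return
    e:  risk-aversion parameter
    t:  capital-gains tax rate
    V:  variance of the risky asset's return
    """
    return (mu - r) / (2 * e * (1 - t) * V)

# Illustrative numbers (assumptions, not estimates):
base = risky_fraction(mu=0.07, r=0.02, e=2.0, t=0.15, V=0.04)

# Push both mu and r up by the same 3-point inflation premium:
shifted = risky_fraction(mu=0.10, r=0.05, e=2.0, t=0.15, V=0.04)

print(base, shifted)  # identical: inflation cancels out of (mu - r)
```

The excess return µ - r is the only place inflation enters, so a uniform shift cancels exactly.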

Also, you’d suspect bankers to become worth more as the Cantillon effects run rampant encouraging the financialization of the economy.

Again, I'm pretty sure the Cantillon effects are only worse if you believe the gold standard reduces CPI volatility. That has yet to be argued here, let alone proven.

I wonder if the well funded caravans of migrants we see in some areas of the world have to some extent to do with funding related to EA.

I wonder if your wondering is done in good faith 🤔

Then there is Open A.I. and Chat GPT and effective altruists have been influential in Open A.I. Chat GPT has liberal bias. https://www.foxnews.com/media/chatgpt-faces-mounting-accusations-woke-liberal-bias

I think extremely few people (maybe even no one) pursue making LLMs liberally biased for EA reasons.

Climate change and veganism are two issues that could well lead to hardcore authoritarian policies and restrictions.

Since when has a group representing 3% of the population (vegans) taken enough power to implement "hardcore authoritarian policies and restrictions"?

Like with all identity movements, to elevate one group such as animals you end up reducing the position of another group, such as humans

Only for unhealthy minds, I think? Whether freeing slaves "reduced" the position of non-slaves is a question without an objective answer - only psychological interpretations. For instance, many Indians never eat meat and would tell you they don't feel "reduced" by this.

It does seem that at least a few of the people involved with effective altruism think that it fell victim to its coastal college demographics

That post is just describing regression to the mean, which every informal group encounters. Nothing unique to EA here.

My other conclusion related to the open A.I. incident as well is that the idea of these people that they are those who will put humanity first will lead to them ousting others and attempt to grab more power in the future too. When they do so, will they ever abandon it?

The same could be asked about any group with any large goal: companies, nonprofits, religious organizations. Nothing unique to EA here.

That this action is dishonorable matters

How do we know it is dishonorable?

This means that Sam Altman won't be the first.

won't be the last?

It also means that we got a movement very susceptible to the same problems of authoritarian far left movements in general of extreme self confidence to their own vision and will to power.

Do you have evidence EAs suffer from "extreme self confidence"?

This... encourages the power hungry to be part of it as well.

Again, this isn't unique to EA. Any group with money/power attracts the power hungry. What's your point?

I think there are two important lenses here.

Via the probability-theory lens, we must distinguish between

  • the propensity for the coin to land on heads - unknown
  • the subjective (in the Bayesian sense) probability Yudkowsky assigns to the coin landing on heads on the next flip

Under a Bayesian epistemology, the former is reasoned about using a probability density function (PDF) by which (approximately) every number between 0 and 1 is assigned a subjective probability. Then, when we observe the flip we update using the likelihood function (either x for heads or 1-x for tails). What you're talking about is essentially how spread out Yudkowsky's current PDF is.
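A minimal sketch of that update (the uniform grid prior and the particular flip sequence are my assumptions for illustration):

```python
# Discretized prior over the coin's heads-propensity x in [0, 1].
N = 1001
grid = [i / (N - 1) for i in range(N)]
prior = [1.0 / N] * N  # uniform prior over propensities

def update(pdf, heads):
    """Multiply by the likelihood (x for heads, 1-x for tails), renormalize."""
    post = [p * (x if heads else 1 - x) for p, x in zip(pdf, grid)]
    total = sum(post)
    return [p / total for p in post]

posterior = prior
for flip in [True, True, False]:  # observe H, H, T
    posterior = update(posterior, flip)

# Posterior mean = subjective probability of heads on the next flip
mean = sum(p * x for p, x in zip(posterior, grid))
print(round(mean, 3))  # ≈ 0.6, matching Laplace's rule: (2+1)/(3+2)
```

How spread out `posterior` remains after the flips is exactly the "how spread out is Yudkowsky's current PDF" question.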

The other lens is markets-based, which I've touched on before. Briefly, for reasons that are obvious to anyone in finance, there is a world of difference between

  • believing a stock is worth X
  • offering to buy the stock for X+0.01 from anyone and sell it for X-0.01 to anyone

In real life, the bid-ask spread that market makers offer depends on a great number of factors, including how informed everyone else in the market is relative to themselves. Under this lens, credible intervals (or whatever phrase you want to use) are not things individuals have in isolation; they are things individuals have within a social space: if you're with a bunch of first-graders, you might have a very tight bid-ask spread when betting on whether a room-temperature superconductor was just discovered; if you're with a bunch of chemistry PhDs, you're going to adopt an extremely wide spread (e.g. "somewhere between 5% and 95%").

Assortative Mating and the Industrial Revolution: England, 1754-2021:

Abstract:

Using a new database of 1.7 million marriage records for England 1837-2021 we estimate assortment by occupational status in marriage, and the intergenerational correlation of occupational status. We find the underlying correlations of status groom-bride, and father-son, are remarkably high: 0.8 and 0.9 respectively. These correlations are unchanged 1837-2021. There is evidence this strong matching extends back to at least 1754. Even before formal education and occupations for women, grooms and brides matched tightly on educational and occupational abilities. We show further that women contributed as much as men to important child outcomes. This implies strong marital sorting substantially increased the variance of social abilities in England. Pre-industrial marital systems typically involved much less marital sorting. Thus the development of assortative marriage may play a role in the location and timing of the Industrial Revolution, through its effect on the supply of those with upper-tail abilities.

ETA: bolded the most important sentence

Its existence mostly serves to confirm the validity of the 1d model

You can also be on the lookout for different games to play.

Do you mean leaving the company and/or deciding to put your energy into non-work things? Or something else?

leaders don't really aggregate the knowledge of their followers.

Hmm. I'm imagining something like an explicit set of users who are gatekeepers, so if I have a 10x idea, I can just convince one person to have The Powers That Be consider it? Something along those lines?

Some which could come to mind...

I think it's important to decide whether we're judging these from the inside or the outside.

If you went to work for Apple, I feel pretty sure you'd come away thinking it is woefully incompetent. From the outside, however, it largely appears competent. Not unlike the other FAANG companies imo. Likewise, if you actually worked as a priest in the Catholic Church in Spain in the 20th century, I'd be shocked if you felt this was what "blistering, white-hot competence" looked like. From the outside, I think EA is pretty clearly amazingly competent, saving more counterfactual lives per dollar than nearly any other organization, even if you round everything hard-to-value to zero. From the inside, however, ...

Re EA being less effective. Alas, it is tedious, but I fear the only way for us to reach a common understanding is point by point, starting with

The Forum

First, re moderation policy - this is something we discuss occasionally here. Blunt people think it's crazy to mod someone just because they were blunt - it drives away such people and we lose their valuable opinions! Other people think the reverse is more powerful: blunt people drive away blunt-averse people and cause the loss of their valuable opinions. I'm unfamiliar with any actual evidence on the matter.

Next, spending. The comment you link to explicitly says they would not accept 2x funding, which imo puts them head and shoulders above the default of outside society (e.g. a team at a S&P 500 company, in the government, or at a typical nonprofit). I personally put a fair amount of weight on that kind of signal (I loved that Evidence Action closed down their bussing program for not-enough-impact reasons). I think it's quite plausible that the forum's benefit of fostering an EA community creates new EAs and retains old ones to the extent that the value outweighs the $2m cost.

That being said, I think you are probably correct in your own comment in that thread in pointing out there is a margin-average distinction being elided, so the 2m probably really is too high.

That comment also links to a page on how they're trying to have impact. The task they rate as the most promising is running job ads on the forum. The second-most promising is helping recruiters find potential candidates. Those seem reasonably valuable to me, but, I'd still guess the EV is less than $2m.

That being said, there are some ameliorating factors:

  • The whole analysis depends on how much you think EA is money-constrained versus talent-constrained - fwiw, Scott leans more towards the latter. This takes the cake for the biggest misconception that new-to-EA people have - that money-constraints are the primary issue.
  • Building on that, the budget appears to have absolutely ballooned with the help of FTX funding. If this is true, it's unclear what exactly the counterfactual alternative was - i.e. was this money earmarked specifically for this forum? for community outreach? idk. Certainly, SBF's donations were not entirely effectiveness-driven.

Ultimately, I'm inclined to agree that $2M is too much, but without context on how the budget was determined, I'm not sure how much of a black eye this should be on EA as a whole.

Criminal Justice Reform

When I went through Open Philanthropy's database of grants a couple years ago, I felt only about half its spending would fall under conventional Effective Altruist priorities (e.g. global health, animal welfare, X-risk). That is, I've felt for a couple years that Open Philanthropy is only about half-EA, which, to be clear, is still dramatically better than the typical non-profit, but I don't personally consider them funding a cause as equivalent to the EA community thinking the cause is effective. #NoTrueScotsman

I'm going to be honest - I do not, tonight, have the time to go through the two "alternatives" links with the thoroughness they deserve

The entire purpose of this exercise is to consider how likely the South was to either choose to end slavery on its own or consent to have it chosen for them without bloodshed. The relevant metric, therefore, is how important slavery was to the South.

More concretely, the Civil War depended on individual state governments choosing to secede, so the geographic concentration of slavery in the US is extremely relevant. If slavery had been evenly distributed in the US, I

  1. strongly don't think the Civil War would have been on the table to begin with
  2. tentatively think slavery would have been ended in the 1870s or 1880s

Gotcha - on first reading, I misinterpreted it as

To treat liberalism as an inevitable endpoint, or a universal truth, or some manifestation of the underlying laws of the universe; it undermines [the principles and values that] made liberalism triumphant and successful.

which triggered my confusion. Based on what you said, the intention is more along the lines of

To treat liberalism as an inevitable endpoint, or a universal truth, or some manifestation of the underlying laws of the universe; it undermines [the courage, actions, and habits that] made liberalism triumphant and successful.

I think the focus on censorship is misplaced. Censorship is one of the many ways a Culture War can play out, and it is neither necessary nor sufficient for showing a Culture War is occurring. Consider, for instance, a racial minority marching for civil rights. It doesn't matter whether there is censorship - that is quintessentially culture war. Conversely, consider censoring how to make nuclear weapons - that's not a meaningful component of any significant Culture War.

A culture war happens if at least one side decides that the other side is so wrong/dangerous that it needs to be converted

I agree with this. A Culture War is created by two groups of people attaching so much value to a dichotomy that converting the other seems important. So, necessary conditions for a Culture War are

  • A dichotomy [ Edit: or two? I remain agnostic on the extent to which each group can have separate conceptualizations of the dichotomy ]
  • Value placed on the dichotomy enough to prompt both ends to attempt convincing others

To treat liberalism as an inevitable endpoint, or a universal truth, or some manifestation of the underlying laws of the universe; it undermines what made liberalism triumphant and successful.

What does this mean?

The Modelbot is exactly the kind of thing I'm talking about - Sam was exceptionally smart within formal models (epitomized by HFT crypto algorithms) and not really exceptional outside of formal models (e.g. the verbal argument of "what if something goes wrong")

Well, except if you're with a group of people bonding by bullying someone, your perspective implies I should start bullying them too...

ETA: though, I do admit, definitionally that taking the selfish perspective does "better serve" you

I'm saying psychologically healthy people don't see status as zero-sum.

I don't have to feel like I'm losing status if slaves are freed.

I don't have to feel like I'm losing status if I stop eating meat.

Any feeling that I'm losing status is a feature of my brain, not the world.

some Christian charity dedicated to banning abortion is usually happy to switch method to boost efficiency

I'm not sure this is generally true. I think it's usually fairly difficult for nonprofits to admit a program is ineffective. Indeed, one of the reasons I like Evidence Action so much is that they turned down a program (busing farmers to the city during the off season so they could work) that turned out to be less effective than their other programs (deworming and chlorine in water).

But even acknowledging that non-EA nonprofits do sometimes turn down less-effective programs, my main point is that virtually zero Christians will ask themselves which non-profit will actually be most effective at banning abortion.

What does that mean?... Please be specific.

This is exactly what I wanted him to do, but I was being snarky about it. I'd thank you for being kinder, but...

Have you heard of a guy called "Sam Bankman-Fried?" He was in the news a little bit lately.

A single guy in finance being over-confident is pretty minimal evidence that EAs as a group and as a constellation of organizations suffer from "extreme self confidence".

But it seems outrageously myopic and self absorbed to conclude that the people focused on "Twitter experts" (which include everyone from the media up to the Chief medical advisor to the President) care more about political consequences than the truth

Let me be clearer. I care far more about ensuring I believe the truth than I do about whether society believes the truth. I didn't mean to imply the "power" perspective was bad/wrong/useless.

What's more, you have developed a definition of expert which renders everything you say inscrutable (at best) to everyone else, which is generally only good for sticking your head in the sand, not for engaging in thoughtful conversations.

I care about having accurate beliefs about specific things (e.g. HBD, causal effect of college attendance on income, etc). Because of this perspective, I don't really care if Experts™ are biased. I do care if specific "experts" are biased or have poor epistemological hygiene - though, even there, I care more about whether particular papers are biased.

I don't think this is "sticking [my] head in the sand" - I think it's focusing my "thoughtful conversation" energy elsewhere.

Wrong in the sense that the official data turned out to be false? No. The one time someone gave me concrete anecdotal data - it closely matched the official data. So, I haven't been given any evidence the official data was wrong/falsified.

Wrong in the sense that high inflation ended up persisting higher than I (or the market/Fed) expected? Yes.

Wrong in the sense that I think my approach to the question was worse than the competition's? Not really. I don't think "who got that one random prediction right" is a good way to check who will do the best at predicting in general. I will say, my view has become a little more nuanced, but it's fundamentally the same.

I will also note that few on this site (any?) actually gave concrete predictions on which to judge them, so I'm not even sure any of them did do better. Predicting "inflation will be higher than expected" has a 50% chance of being right by default, and even higher if you allow yourself degrees of freedom in what "expected" means.

Depends on your definition of “caring”

¯\_(ツ)_/¯

As an example, I have a very specific explanation of how my caring has changed. You decided to simply assert that this change doesn’t count as “not caring” to you.

I could practice some “Outside View” and wonder whether you might be right - but then I remember that the Outside View presupposes the other person is actually adding valuable information and not just trying to “win” points at my expense

Past me: I got downvotes; what is wrong with my comment?

Present me: I got downvoted; what does that imply about the community doing the voting?

Specifically, GSS data showed that 63% of young men reported themselves as single while only 34% of young women did. This was of course immediately seized upon as proof that a huge proportion of girls are in “chad harems.” Since nobody bothers to read beyond a sensationalist headline, not many dug deep enough to discover that this proportion has been roughly the same for over thirty years, so if the chadopoly is real, it’s been going on for a long time

When I looked into this, I found that, across all age groups, the implied number of non-single people was roughly equal for both sexes. This strongly suggests the factor driving the gap is a large number of younger-woman-older-man pairs.
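To illustrate with a hypothetical balanced cohort (the 63%/34% rates come from the quoted GSS claim; the cohort size of 100 each is my assumption):

```python
# Hypothetical cohort of 100 men and 100 women aged 18-29:
partnered_young_men = 100 - 63    # 63% of young men report being single
partnered_young_women = 100 - 34  # 34% of young women report being single

# Within the cohort, 66 partnered women vs 37 partnered men looks like
# a "chad harem" mismatch -- until you allow partners outside the
# cohort. The gap is the number of young women whose partner is an
# older man:
gap = partnered_young_women - partnered_young_men
print(gap)  # 29
```

No polygyny is needed to reconcile the two percentages; age-gap pairings alone close the books.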

I don't think trying to convince an ally's citizens to secede is generally considered a wise geopolitical move. Would you rather

  1. have Canada as a close ally
  2. make the Canadian government dislike us for decades (centuries?) but we get the honor of adding a few million citizens - citizens who probably average half the US average income, which means they will probably be net government recipients rather than payers.

Seems like an easy choice to me.