NunoSempere

0 followers   follows 1 user   joined 2022 September 10 10:19:29 UTC

User ID: 1101

No bio...

hard-to-evaluate work at any large organization... learn to play the game

You can also be on the lookout for different games to play.

You seem to think it would be better if powerful EAs spent more time responding to comments on EA forum

I think this is too much of a simplification. I am making the argument that EA is structured such that leaders don't really aggregate the knowledge of their followers.

Can you give an example of any multi-billion dollar movement or organization that displays "blistering, white-hot competence"?

Some that come to mind: the Catholic Church in Spain from 1910 to the early 2000s, Apple, Amazon, SpaceX, the Manhattan Project, the Israeli nuclear weapons project, Peter Thiel's general machinations, Linus Torvalds's stewardship of the Linux project, competent Hollywood directors, Marcus Aurelius, Bismarck's unification of Germany and his web of alliances, the Chicago school, MIT's J-PAL (endowment size uncertain though), the Jesuits, the World Central Kitchen.

provided concrete evidence that interventions are less effective than claimed

I discussed a previous one on the Motte here; here is a more recent one: CEA spends ~$1–2M/year to host the equivalent of a medium-sized subreddit, or a forum with probably less discussion than The Motte itself.

offered concrete alternatives to this target audience.

Here are some blue-sky alternatives; Auftragstaktik is one particular thing I'd want to see more of.

For future reference, to replicate something like the above footnotes, write in normal markdown, compile it to HTML with discount markdown, and then paste the HTML into the Motte. There is also pandoc, which might have bindings The Motte itself could use.

The markdown syntax for footnotes is:

Lorem ipsum dolor sit amet, consectetur adipiscing elit[^footnote], sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 

[^footnote]: content of the footnote.

Text continues as normal, but footnotes will show up at the bottom. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. 
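A minimal sketch of that compile step in Python, shelling out to discount's CLI; I'm assuming the binary is installed as `markdown` and that your build accepts `-f footnote` to enable footnotes (check the man page for your version):

```python
import subprocess

# Sketch: compile post.md to HTML with discount markdown, footnotes enabled.
# Assumes discount's CLI is installed as `markdown` and accepts `-f footnote`;
# check `man markdown` for your version. pandoc works similarly:
#   pandoc post.md -o post.html
result = subprocess.run(
    ["markdown", "-f", "footnote", "post.md"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # paste this HTML into the comment box
```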

I've now figured out how to copy over the footnotes. Still, I'd been too lazy to do the half hour of editing for the Motte. I'm torn; I see the point of having a costly signal, but at the same time, that signal would have been too costly for me. I guess in some sense I might be a marginal case, so it's for the Motte to decide. At the same time, my sense is that half an hour to show the Motte something you are excited about is too high a bar.

I see what you mean. I figured out how to preserve the footnotes, and have copied the text over.

The alternative is not only Open Philanthropy/the NHS/the government listening to people. It's people organizing themselves civically, independently, and with fewer constraints. For this you don't need barriers to entry of the kinds you are thinking of; you need the community not to have atrophied its muscle for organizing things on its own initiative, using its own resources, with its own labour. As an example, consider the Informal Anarchist Federation.

"if your argument is just that "The current, top-down model has costs"

I'm arguing on the margin. Yes, the current top-down model has costs, and I think that on reflection these are much higher than EA's self-presentation suggests, which should make other alternatives look better on the margin. I'm saying that, if one reflects on these dynamics, for some fraction of people who buy deeply into EA, the costs will have been too high. Maybe the trouble is that I'm not arguing about what "EA as a whole" should do, but rather arguing at the level of individuals.

Here is a rant about Effective Altruism. It goes as follows:

  1. I want to better understand in order to better decide
  2. That the structural organization of the movement is distinct from the philosophy
  3. and EA structurally orients itself around one billionaire's money.
  4. In practice, cost-effectiveness estimates keep EA honest, but only for global health
  5. Outside of global health, the leadership of the EA machinery has even more unappealing aspects
  6. ...and EA leadership doesn't display a blistering, white-hot competence
  7. Therefore it might make sense to walk away more often

Unflattering aspects of Effective Altruism

1. I want to better understand in order to better decide

As a counterbalance to the rosier and more philosophical perspective that Effective Altruism (EA) likes to present of itself, I describe some unflattering aspects of EA. These are based on my own experiences with it, and my own disillusionments[1].

If people getting into EA[2] have a better idea of what they are getting into, and decide to continue, maybe they'll think twice, interact with EA less naïvely and more strategically, and not become as disillusioned as I have.

But also, the EA machine has been making some weird and mediocre moves, leaving EA as a whole as a not very formidable army[3]. A leitmotiv from the Spanish epic poem The Song of the Cid is "God, what a good knight would the Cid be, if only he had a good lord to serve under". As in the story of the Cid then, so in EA now. As a result, I think it makes sense for the rank and file EAs to more often do something different from EA™, from EA-the-meme. To notice that taking EA money carries costs. To reflect on whether the EA machine is better than their outside options. To walk away more often.

2. That the structural organization of the movement is distinct from the philosophy

Effective altruism's philosophical ideas are seductive: who wants to be less effective? Who wants to work on intractable, overgrazed, and worthless projects, as opposed to tractable, neglected, and impactful problems? But liking the philosophy doesn't mean you will like the actual movement, or that you should join it. You can have many different kinds of organizational structures corresponding to the same philosophy, and some will be a poor fit for you.

For example, after the 2008 crisis, one could be in favor of reforming the US financial system and holding those responsible for the 2008 crisis accountable, but find Occupy Wall Street deeply disappointing. Historically, there has been huge confusion about this point in EA.[4]

3. and EA structurally orients itself around one billionaire’s money.

To a first approximation, the structural organization of Effective Altruism is as follows:

  • Dustin Moskovitz, a deca-billionaire, is giving his fortune away through his foundation. His foundation, Open Philanthropy, has a large staff subdivided into cause areas.
  • Organizations are chasing Open Philanthropy’s funding.
  • Rank and file members are seeking to work at organizations with Open Philanthropy funding ("EA organizations").

There are players who do not fit into this scheme, but I would describe their contribution as marginal. Not as irrelevant, mind you, just as very small in comparison with the Open Philanthropy juggernaut. Still, a few points of nuance:

  • Dustin Moskovitz (ca. $10B) isn't the only billionaire giving money to the cluster of organizations under the EA banner. There is also Jaan Tallinn (ca. $1B), who gives through various "Survival and Flourishing" funds. More may be coming.
  • There are a few people "earning to give", or donating independently of Open Philanthropy. The ones I know of are smaller, with a net worth of around $10M.
  • Not 100% of the organizations or individuals in the EA movement are chasing Open Philanthropy funding.
  • Sometimes, Open Philanthropy doesn’t donate to projects directly but e.g., donates to some Effective Altruism Fund or to the Centre for Effective Altruism, which donates to the final project.
  • etc.[5]

Still, the decisions of Open Philanthropy end up being decisive. How decisive? Well, Open Philanthropy directs something like 90% of current funding within the EA movement[6]. So other funders just don't have as much capacity in comparison. For example, running a 10-person organization in the EA movement really benefits from having backing from Open Philanthropy, because relying on the other funders adds too much uncertainty and volatility. So I'd say that they end up being pretty decisive.

4. In practice, cost-effectiveness estimates keep EA honest, but only for global health

If we have some reliable way of estimating the value of projects, structural organization doesn't matter that much. You would propose your project, it would be evaluated, and if it was above some cost-effectiveness bar, it would be funded. That is, to a first approximation, what happens within the global health cause area in EA. You can seek to objectively[7] estimate the quality-adjusted life years that an intervention saves. You can have an evaluator like GiveWell. And you can have an organization like Charity Entrepreneurship trying to find interventions that would be evaluated favorably by GiveWell.
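As a toy sketch of that funding loop in Python, with made-up numbers; the $50-per-QALY bar and the project figures are hypothetical, not GiveWell's actual thresholds:

```python
# Toy model of the global-health funding loop described above; the
# $50-per-QALY bar and the project numbers are made up for illustration.
BAR_DOLLARS_PER_QALY = 50.0

def fund(cost_dollars: float, qalys_saved: float) -> bool:
    """Fund a proposed project iff it clears the cost-effectiveness bar."""
    return cost_dollars / qalys_saved <= BAR_DOLLARS_PER_QALY

print(fund(cost_dollars=80_000, qalys_saved=2_000))  # True: $40/QALY
print(fund(cost_dollars=80_000, qalys_saved=1_000))  # False: $80/QALY
```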

The situation with animal welfare is a bit messier. Open Philanthropy might be making some quantified estimates, but I don’t recall them being public. And Animal Charity Evaluators, the would-be GiveWell equivalent, doesn’t do quantified estimates of the value of the charities they rank. Still, in principle you could do estimates of value for animal suffering interventions and avoid the problems I outline below.

With longtermism and global catastrophic risks, you don't have good methods of estimating the value of different interventions, for example of determining that one AI safety research agenda is better than another, or that one AI governance approach is superior. So in practice, you end up relying on the personal judgment of a crowd of amalgamated[8] EA leaders for making funding and prioritization decisions.

Historically, Open Philanthropy has been slow to trust people, either as employees or as grantees. So these amalgamated EA leaders have been overworked, busy, unapproachable[9]. In practice, people go to great lengths to try to approach and socialize with Open Philanthropy employees, like visiting or moving to the very expensive San Francisco Bay Area.

That grant-makers are busy and unavailable makes getting access to them hard, because the group has limited available throughput. But say you increase the throughput. Then, if the game and the habits are still to compete for a limited pool of resources, and if there is still infinite demand for free billionaire money, then charismatic grantees close to EA leaders will still out-compete others. Competing for access is still the wrong game to be playing, though, and I resent this; you don't want to have a pool of talent competing hard for grant-maker attention, you want to have a pool of talent working hard at making the world a better place[10].

Consider the sunflower. The sun provides a source of energy; the sunflower evolves to follow it. So with Open Philanthropy and Effective Altruism. I'm then saying that in a sunflower field, flowers that don't move to track the sun could be out-competed. But tracking the sun is a distraction, an instrumental goal at best.

The same story told from the bottom up is: an aspiring EA starts with the intention of doing large amounts of good, and will try to do something semi-ambitious. Then he'll find out that funding constraints are a big part of making shit happen. And when solving that funding bottleneck, he will be in a social context where the natural good move is to try to get access and then seduce a busy, overworked, and therefore unavailable coterie of grantmakers[11][12]. He'll burn out.

But that’s the wrong game to be playing because if you look at autochthonous EAs, at the rank and file, many are nerds, nerds who are able to do good work but who will find it hard to jockey for access. Their winning move would be not to play, and to gain real power by building something independently.

5. Outside of global health, the leadership of the EA machinery has even more unappealing aspects

Even beyond the sunflower issues, the central EA machinery, at organizations like the Center for Effective Altruism or Open Philanthropy, has other issues that make it unappealing to me as a source of leadership—of guidance, of evaluation, of moral direction:

First, their priorities are different from mine: Open Philanthropy seems fairly committed to worldview diversification, which I consider a mediocre framework. The Center for Effective Altruism cares much more about the reputation of the “Effective Altruism” brand than I do. In general, I get the impression that they want to “be in control”, and reduce variance from people they don’t deeply trust, while at the same time coming to trust people slowly. In contrast, I would prefer to increase formidability, to employ Auftragstaktik.

As a small but very concrete example of the disconnect between my priorities and those of the EA machine, the EA forum has become a worse place for me over the last couple of years; it seems slower, more pushy, more censorious, more paternalistic. It started as a lean, mean machine hosting community discussion, and it is now more of a vehicle for pushing ideas CEA wants you to know about. In the process it grew to cost $2M/year (!?!) and to employ six to eight people. You can see this thought elaborated further here.

Second, I don't really understand how feedback loops work in Effective Altruism. If someone thinks that Open Philanthropy is making some mistakes, do they ¿write an EA Forum post and hope to get the attention of someone inside an inner circle? ¿ambush someone at a party? ¿how do they find the party? ¿how do they get heard? Over the past years I've had some disagreements with Open Philanthropy around forecasting strategy, worldview diversification, or the wisdom of committing to donate all of Moskovitz's money before he dies, and I haven't felt particularly heard.

Third, I feel that EA leadership uses worries about the dangers of maximization to constrain the rank and file in a hypocritical way. If I want to do something cool and risky on my own, I have to beware of the "unilateralist curse" and "build consensus". But if Open Philanthropy donates $30M to OpenAI, pulls a not-so-well-understood policy advocacy lever that contributed to the US overshooting inflation in 2021, funds Anthropic[13] while Anthropic's President and the CEO of Open Philanthropy were married, and romantic relationships are common between Open Philanthropy officers and grantees, that is ¿an exercise in good judgment? ¿a good ex-ante bet? ¿assortative mating? ¿presumably none of my business?

Fourth, my impression is that the leadership doesn’t see itself accountable to the community, but to their understanding of the philosophy and to the funding source. E.g., Holden Karnofsky, the erstwhile head honcho of Open Philanthropy, for a long time didn’t answer comments on his posts.

Fifth, Open Philanthropy is large enough that it begins to have "seeing like a state" problems, the problems of bureaucracies. It moves slowly, and seems to have an "unfocused gaze". E.g., it took two years and an extra $100M to exit the criminal justice cause area. Its forecasting grant-making could have used more small experimentation over large grants to existing organizations. For example, Scott Alexander's grants seem much more exciting than an $8.5 million grant to Metaculus, but Open Philanthropy chose the $8.5M to Metaculus and warped the forecasting ecosystem and the distribution of talent towards Metaculus-shaped things instead of many small experiments[14].

So overall, my impression is that the leadership of EA holds a "leadership without consent": a leadership without much listening, and without telegraphing its priorities so that leaders and led can coordinate better, and without incorporating followers' perspectives and feedback. It falls on the wrong side of the socialist calculation debate[15], and doesn't compensate enough for that. And that makes some sense: Open Philanthropy, the main source of funding, is a bureaucracy spun up to spend a billionaire's wealth according to his[16] broad, delegated desires. It would then be surprising if it were also able to skillfully steer and command a 10k-strong community, and to listen to and address their worries and absorb their perspectives. But as a result, I don't feel particularly inclined to take my cues from that machinery.

6. …and EA leadership doesn’t display a blistering, white-hot competence

If the EA leadership were, you know, an Arthurian elite which routinely displayed a blistering, white-hot competence, then I would be more willing to continue pouring my heart and soul into plans of their design in the absence of feedback loops.

But they aren’t, so I’m not.

7. Therefore it might make sense to walk away more often

I see bright-eyed young EAs wanting to roll deeper into the EA rabbit hole and get employed by EA organizations. They will learn much at first, but later find themselves at the mercy of a machine that can't hear them. Bad move to walk into that without forewarning. I see the EA machine luring brilliant minds that might be better off trying to amass a small fortune through capitalistic entrepreneurship and then deploying that fortune subject to many fewer constraints. I see people with ambitious visions get their wings clipped because they are illegible to grantmakers, and I think: what good knights they would be, if they had a good lord to serve under.

Perhaps it makes sense to instead do something subtly different from EA, to ignore the implicit vibes and expectations of the EA machine. To sometimes take their funding, but to do your own thing and preserve your ability to comfortably leave. To not serve a billionaire’s notion of the good within a structure with exceedingly poor feedback loops. To notice that if you could do well inside the EA machine, you might do better outside of it. And sometimes, to simply walk away, to burn the remainder of your youth in the pursuit of making the world a better place, outside of EA.


  1. You can read a bit more about what I was trying to do here, and some more reflections here.
  2. That is, I think this blog post could plausibly be useful for individual people reading it, not for EA institutionally to address the aspects I discuss. I don’t think there is an EA entity with the inclination to digest and address these points.
  3. I like bellicose framings, but one could use neutral metaphors instead: "…making mediocre moves, reducing the EA community's ability to do good together", or more flowery ones: "…making mediocre moves, reducing the EA community's ability to flourish and give birth to valuable projects."
  4. Incidentally, this is why providing criticism of EA is not a catch-22 where you thereby “are” “an EA”, or “are doing effective altruism”. In particular, you can agree with some of the philosophical attitudes and positions of Effective Altruism, without thereby having to pledge allegiance to the EA machine.
  5. E.g., technically, Open Philanthropy is its own thing, and the vehicle for Moskovitz’s donations is Good Ventures. But who cares.
  6. For example, per here, Open Philanthropy donated $450M in 2021. Did other sources of funding cumulatively add to more than $45M? My guess is no, and that the distribution of funding is steep. For example, Jaan Tallinn donated $23M in 2021. So the EA movement wouldn't literally be a monopsony, but still, because capital is so concentrated, it seems like capital has much more power than labour.
  7. There are going to be some free variables, e.g., around what the “exchange rates” or conversion factors between money, illness and death should be, or around how to value a young person’s life vs an older person’s. But you can be transparent and predictable about how you will resolve these ambiguities.
  8. These are going to be grant officers at Open Philanthropy, but also EA Fund managers, people in charge of hiring decisions at CEA and at large EA organizations, and so on.
  9. Readers are also welcome to hypothesize what dynamics arise when trust is scarce. Perhaps promotion to incompetence across the people that are trusted? Or exacerbation of inner circle dynamics?
  10. You can solve this problem by having grant-makers be anonymous. Here is a robinhansonian design: have a cohort of anonymous regrantors and allow members of the public to make $20k bets at 1:2 odds on whether any one particular person is a grant-maker. This ensures that your regrantors will remain anonymous (a toy calculation of the incentives is sketched after these footnotes). Anonymous philanthropy has precedents; see, e.g., here.
  11. Doesn’t seem like a great attachment theory setup.
  12. Incidentally, having romantic relationships with Open Philanthropy employees increases access to that coterie. That is, I suspect that having a close relationship with Open Phil people privileges the hypothesis that your grant is worth evaluating.
  13. For some confirmatory evidence, note that Luke Muehlhauser, an Open Philanthropy grantmaker, is a board member at Anthropic.
  14. I find it interesting that when he left Open Philanthropy to start the FTX Future Fund, Nick Beckstead (with others) designed it to look completely different from the Open Philanthropy model: trusting independent and eclectic expert regrantors to make grants according to their judgment, evaluated on their performance, rather than hierarchies of grantmakers each restricted to a cause or sub-cause.
  15. See here for a more libertarian perspective which disagrees in emphasis with the Wikipedia page.
  16. and his wife’s

German grammar

Actually, now that I think about it, German has the feature that in subordinate clauses (i.e., most sentences saying anything complicated), the verb comes at the end. This makes sentences messier. It's possible that having strong categories could be a crutch to make such long sentences understandable.

Not sure to what extent that is a just-so story, though.

German grammar

I am not talking about grammar, I am talking about speech as practiced.

almost entirely negative German stereotypes

based on experience with a specific caste/subgroup of Germans. I contend this is valid, in the same way that, e.g., talking about Puritan ethics or values or attributes is valid. I could go on about the positive aspects, but the negatives are more salient, since we are talking about the limitations of language rather than, e.g., the benefits of discipline.

and the example of a fictional Frenchman from a 19th century novel

also an 1892 book, in case you find that more persuasive. You might find the Google translation of the title a bit interesting. But I think that the Javert example captures the core intuition. If you are a Javert kid, surrounded by Javert parents and Javert peers who utter Javert phrases, it's pretty intuitive to me how you will grow to mimic those utterances.

Just Look At the Germans. The way these minds are shackled by man-made categories was really obvious to me, as a foreigner from Spain:

  • In a charity I was volunteering at, they put great emphasis on having processes, structures, and sub-groups responsible for categories of work. Sadly, despite this, not much got done.
  • Their morality is based on some concept of what is MORALLY CORRECT that doesn't leave much place for uncertainty. Sure, let's shut down the nuclear plants and cripple the economy and industrial base, because it is MORALLY CORRECT. Let's vote for the Greens, because they are the MORALLY CORRECT party.
  • You wouldn't cross an empty street when the traffic light is red, even if you can see that there aren't any cars coming, because it wouldn't be MORALLY CORRECT.
  • Look at the way Switzerland's nuclear weapons programme went: they established a subcommittee to study the possibility, and when that didn't work, they established a second subcommittee, which produced a report, which... you get the idea.
  • The way you learn math is by understanding a finite list of concepts and methods, going subject by subject
    • Rather than by having a problem and looking for an algorithm/tooling/approach which solves it.
  • To understand language and communication, you differentiate between sense and meaning; you seek to understand language by presenting categories for it.
  • Consider Javert from Les Misérables. He is hunting the sympathetic protagonist because he is A CRIMINAL, and criminals are DANGEROUS TO SOCIETY and must be BROUGHT TO JUSTICE.

In a stylized way, there is a common way of being amongst Germans which is something like, implicit Aristotelianism? There are categories, which are so robust that they need not be questioned, and which can be a source of comfort and security in this uncertain world. This is why we should choose a subcommittee to address the subcategory of Strategic Dialogue, which is different from Cooperative Dialogue (for which a different committee is responsible).

To be clear, though, I admire some parts of it, like the work ethic, the strong economy (particularly compared to my more chill Spain), the part of their moral structure that ends up helping other people. Also, do note that this is just one subculture in the geographical Germany.

So, throughout, what alternatives could my stylized German be missing?

  • Deep understanding (vs shallow understanding based on classification)
  • Employing categories as shortcuts (vs as pillars, as fundaments)
  • Rules as constraints that can sometimes be bent (vs as MORALLY CORRECT commandments)
  • Finding approximate solutions through brute force and simulations (vs analytic solutions through applying a finite list of manipulations)
  • Moral relativism (as opposed to moral realism)
  • The Israeli nuclear weapons programme (as opposed to the Swiss)
  • Not having a stick up your own ass (as opposed to having a stick up your own ass)

Now, there is a question: which part of this is language, and which part is culture? Yeah, I mean, you can definitely have a chill German, but the tradition, the language games, the way language is used in practice by the richer social strata, the utterances that people make in practice and that they grow up with, do contain and transmit these blind spots.

I've been doing OK redirecting YouTube automatically to Invidious in my custom browser. On Firefox I'm using LibreRedirect: https://libredirect.codeberg.page/. For music I'm using yt-dlp. Not much of a plan, though.
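For the music part, here's a minimal sketch using yt-dlp's Python API; the URL is a placeholder, and the mp3 conversion assumes ffmpeg is installed:

```python
import yt_dlp  # pip install yt-dlp

# Sketch: download audio only and convert to mp3; requires ffmpeg.
# The URL is a placeholder, not a real video.
opts = {
    "format": "bestaudio/best",
    "postprocessors": [
        {"key": "FFmpegExtractAudio", "preferredcodec": "mp3"},
    ],
}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=PLACEHOLDER"])
```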

Big fan of your writings.

Just leaving a quick note that I don't understand why you are hosting these on LW rather than on your own site and linking to them. It seems that you don't have that much control over what the LW people do, and, e.g., having your own RSS would be a good preventative measure.

Nice post, thanks for writing it.

Bismarck Analysis has a pretty great analysis of Soros here: https://brief.bismarckanalysis.com/p/the-legacy-of-george-soros-open-society, which might be of interest.

You could also choose nuclear energy, better vaccines and pandemic prevention, better urban planning, etc. Or even in education: things like Khan Academy, Wikipedia, the Arch Wiki, edX, Stack Overflow, etc. provide value and make humanity more formidable. Thinking about those examples, do you still get the sense of pessimism, almost defeatism, from your previous comments?

Mmh, I see what you are saying. But on the other hand, there is such a thing as a Pareto frontier. Some points on that Pareto frontier (points where you can't fulfill more needs without sacrificing previous gains) might be:

  • monomaniacal formidability. You are a titan of industry and you ignore your family because you just care that much about, idk, going to Mars.
  • a life of bucolic contemplation and satisfaction.
  • a flourishing family-values life, caring for your children and the members of your clan
  • a life of hedonism, enjoyment and vice
  • etc.
  • some mix of the above, e.g., having a good career AND a family AND having fun AND ...

Like, if I look at my actions, I don't get the impression that I'm on any kind of Pareto frontier, where, idk, listening more to my in-the-moment curiosity trades off against the success of my romantic relationships, which trades off against professional success. It seems like I could just be... better on all fronts? Contradictorily, there is a sense in which I am "doing the best I can at every given moment", but it feels incomplete, and doesn't always ring true. Sorry for the rambling here.
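To pin down "not on any kind of Pareto frontier": a point is off the frontier exactly when some feasible alternative dominates it. A toy check in Python, with made-up scores:

```python
from typing import Sequence

# Toy Pareto-dominance check; the dimensions and scores are made up.
def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if allocation `a` is at least as good as `b` on every
    dimension and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

current = (5, 6, 4)      # (curiosity, relationships, career): hypothetical scores
alternative = (6, 6, 5)  # feasible and better on all fronts
print(dominates(alternative, current))  # True: `current` isn't on the frontier
```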

For your example, making the same comment in the morning seems like it could plausibly have been a better choice.

I think your first priority should be in finding reliable ways to prioritise and focus on long-term goals.

Yeah, maybe. My discount rates have increased a bunch after the fall of FTX, since their foundation was using some of the tools I was working on for the last few years. So now I'm a bit more hesitant about doing longer term stuff that relies on other people, and also, sadly, longer term stuff in general.

I think military greatness is a red herring here: I don't think it's a realistic shot at greatness for readers here. Starting a religion, a billion-dollar startup, or a social/political movement seems much easier.

Maybe I'm just rehashing 'good times breed weak men, weak men make harsh times'.

Maybe so, but it's a useful handle nonetheless.

Elon Musk and Dominic Cummings are the closest we have to the great men 'type' today, aiming for performance above all else. Elon Musk is widely hated and disliked by the usual suspects in government and acceptable society

I'm not sure about those two. I prefer Peter Thiel as an example. He was:

  • able to shut down Gawker
  • able to create several scalable companies: PayPal, Palantir, Founders Fund
  • able to spread his worldview around, through books, the Thiel Fellowship, etc.
  • able to make multi-year political plans (endorse Trump, give very high salaries to people who could later run for office to get around spending limits), even if these didn't work out (Trump doesn't seem to have consulted him for much, and his candidates didn't win their elections)

And like, these aren't world-changing, but he's still got time, and he isn't constrained by fickle political winds.

I mean, we don't have a small number of clearly achievable goals, but if you pick N major human drives, the question then becomes why we aren't better at attaining all the major human drives, and formidability would just be a shorthand for becoming better across all these dimensions. And I'd in fact think that excellence in various domains does correlate.

On top of this, being formidable for long enough, in an impressive enough position for people to take note, requires huge amounts of luck over a long period

Sure, but we don't see that many people taking their shot at greatness come what may rather than wasting away in their cubicle jobs.

Respectfully disagree. Though it's hard to say whether we do disagree in substance. Maybe you think that trying to be maximally ambitious is always misguided, and I'd agree that being misguided + maximally ambitious is not something to be admired? idk.

A. There is a heap of inertia.
B. Enthusiastic people with a grand plan are working in fields which already have inertia.
C. Therefore, enthusiastic people with a grand plan will be bogged down in that pre-existing inertia.

I mean, sure. But then the answer would seem to be to not work inside fields which already have huge amounts of inertia: to explore new fields, or to create a greenfield site. To give a small example, the Motte does happen to be its own effort, and thus seems less bogged down. Or: many open source projects were started pretty much from scratch.

Any thoughts on why people don't avoid fields with huge amounts of inertia? Otherwise the inertia hypothesis doesn't sound that explanatory to me.

Why are we not better, harder, faster, stronger

Now here: https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/ (on the motte here: https://www.themotte.org/post/593/why-are-we-not-harder-better). I'm curious to get your perspective.

BreezeWiki is good. And in general, OP might want to look into https://github.com/libredirect/browser_extension

Updating in the face of anthropic effects is possible

Now here: https://nunosempere.com/blog/2023/05/11/updating-under-anthropic-effects/. Pasting the content to save you a link:

Status: simple point worth writing up clearly.

Motivating example

You are a dinosaur astronomer about to encounter a sequence of big and small meteorites. If you see a big meteorite, you and your whole kin die. So far you have seen n small meteorites. What is your best guess as to the probability that you will next see a big meteorite?

In this example, there is an anthropic effect going on. Your attempt to estimate the frequency of big meteorites is made difficult by the fact that when you see a big meteorite, you immediately die. Or, in other words, no matter what the frequency of big meteorites is, conditional on you still being alive, you'd expect to only have seen small meteorites so far. For instance, if you had reason to believe that around 90% of meteorites are big, you'd still expect to only have seen small meteorites so far.

This makes it difficult to update in the face of historical observations.

Updating after observing latent variables

Now you go to outer space, and you observe the mechanism that is causing these meteorites. You see that they are produced by Dinosaur Extinction Simulation Society Inc., that the manual mentions that it will next produce a big meteorite and hurl it at you, and that there is a big crowd gathered to see a meteorite hit your Earth. Then your probability of getting hit rises, regardless of the historical frequency of small meteorites and the lack of any big ones.

Or conversely, you observe that most meteorites come from some cloud of debris in space that is made of small asteroids, and through observation of other solar systems you conclude that large meteorites almost never happen. And for good measure you build a giant space laser to intercept anything that comes your way. Then your probability of getting hit by a large meteorite lowers, regardless of the anthropic effects.

The core point is that in the presence of anthropic effects, you can still reason and receive evidence about the latent variables and mechanistic factors which affect those anthropic effects.
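Here is a small numerical sketch of both halves of that point in Python; the grid, the prior, and the latent-variable likelihood are all made up. Conditioning on survival exactly cancels the "n small meteorites" likelihood, so the historical record alone leaves the posterior equal to the prior, while latent-variable evidence updates normally:

```python
import numpy as np

# Grid over the unknown per-meteorite probability f that a meteorite is big.
f = np.linspace(0.0, 0.99, 100)
prior = np.full_like(f, 1 / len(f))
n = 50  # small meteorites observed so far

# Likelihood of the record, P(n small | f) = (1 - f)^n, is exactly the
# survival probability, so conditioning on being alive cancels it out:
record = (1 - f) ** n
survival = (1 - f) ** n
posterior = prior * record / survival
posterior /= posterior.sum()
assert np.allclose(posterior, prior)  # the record alone teaches nothing

# Evidence about a latent variable (say, the debris cloud is made of small
# rocks) is ordinary evidence; this likelihood function is made up.
latent_likelihood = np.exp(-10 * f)
posterior2 = prior * latent_likelihood
posterior2 /= posterior2.sum()
print(posterior2[:10].sum() > prior[:10].sum())  # True: mass shifts to low f
```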

What latent variables might look like in practice

Here are some examples of "latent variables" in the real world:

  • Institutional competence

  • The degree of epistemic competence and virtue which people who warn of existential risk display

  • The degree of plausibility of the various steps towards existential risk

  • The robustness of the preventative measures in place

  • etc.

In conclusion

In conclusion, you can still update in the face of anthropic effects by observing latent variables and mechanistic effects. As a result, it's not the case that you can't have forecasting questions or bets that are informative about existential risk, because you can make those questions and bets about the latent variables and the early steps in the mechanistic chain. I think this point is both in-hindsight-obvious and also pretty key to thinking clearly about anthropic effects.

would pay for some of them if not for my desire to be anonymous

Happy to be paid in monero. You can reach out to me at nuno.semperelh@protonmail.com with a burner account.

My consulting rates are now here: https://nunosempere.com/consulting/. I'll put up a list of bounties in a while; if you are particularly interested, I have an RSS endpoint here: https://nunosempere.com/blog/index.rss (or you could sign up by email, if you are a wimp: https://nunosempere.com/.subscribe/)