sodiummuffin

2 followers   follows 0 users   joined 2022 September 05 03:26:09 UTC

No bio...

User ID: 420

The U.S. produces 51.5 million acres of hay and 37.3 million acres of wheat per year. So setting aside all other forms of animal feed, more land goes to producing hay alone than to wheat.

However, I think that Zeke was referring to small mammals getting killed during harvesting, which my googling suggests is more due to increased predation from loss of cover than getting chewed up by machinery.

Which is why I'm pointing out that raising cattle at scale involves harvesting even more land. Estimating the effects on animals from cropland is difficult, but it's not a comparison that favors beef to begin with.

Sorry, I was going off half-remembered information about how "grass-fed" labeling is meaningless in some countries. A more relevant point is that grass-fed labeling includes food sources like hay, which still have to be harvested, which brings us back to the inherent thermodynamic inefficiency of feeding another animal so you can later eat its meat.

And clearing jungle for pasture is a net improvement for animal welfare, because jungles are obscene murder temples of pure agony, while well-tended pastures are grass and flowers and a few voles (if you don't care about insects).

I was responding based on his assumption that areas like cropland are bad for animals, rather than being good because they involve creating areas where fewer animals are born into lives of suffering. Yes, with the right set of moral assumptions you can view every animal born into the wild as a bad thing, which would be a point in favor of anything that involves using lots of land in a way that leads to a low density of animal life. But once you're considering things at that level of indirect effects, you should also consider that using resources and land to raise cattle trades off against using them in other ways. Strip-mines and suburbs don't have a high density of animals either, and even tree farms aren't that high; it's difficult to predict the effects on land use if people redirected money from meat to something like housing.

In the sufficiently long term the biggest effect might be on social attitudes: as humans gain more and more power over the environment, a society in which ethical vegetarianism is the norm also seems more likely to care about wild animal suffering and act accordingly. (Like those ideas regarding genetically engineering wild animals to reduce their suffering.) If nothing else, wild animals with brains capable of suffering are already becoming a smaller percentage of Earth's population, so the average welfare of animals (including humans in the average) is increasingly driven by whether humanity continues to scale up the population of animals we raise for slaughter alongside our own population. For instance, look at Earth's distribution of mammal and bird biomass - obviously neither mammals nor biomass are the metrics we care about, but it gives a sense of the trend.

But zeke5123 is talking about accidentally killing animals as part of growing and harvesting crops, not optimal land use. That seems like it would be similar per-acre whether you're growing alfalfa or wheat.

It's a completely different subject but I'm reminded of Scott's 2015 post about California's water crisis:

https://slatestarcodex.com/2015/05/11/california-water-you-doing/

34 million acre-feet of water are diverted to agriculture. The most water-expensive crop is alfalfa, which requires 5.3 million acre-feet a year. If you’re asking “Who the heck eats 5.3 million acre-feet of alfalfa?” the answer is “cows”. A bunch of other crops use about 2 million acre-feet each.

All urban water consumption totals 9 million acre-feet. Of those, 2.4 million are for commercial and industrial institutions, 3.8 million are for lawns, and 2.8 million are personal water use by average citizens in their houses.

Which leads to interesting calculations like this:

The California alfalfa industry makes a total of $860 million worth of alfalfa hay per year. So if you calculate it out, a California resident who wants to spend her fair share of money to solve the water crisis without worrying about cutting back could do it by paying the alfalfa industry $2 to not grow $2 worth of alfalfa, thus saving as much water as if she very carefully rationed her own use.
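As a rough sanity check of those numbers (this is my own sketch, not anything from Scott's post: the ~39 million California population is an assumption of mine, and the quote doesn't say what fraction of personal use "careful rationing" is taken to save, so it won't reproduce the $2 figure exactly):

```python
# Back-of-the-envelope check of the alfalfa figures quoted above.
# Water and revenue numbers are from the quote; the population is assumed.
ALFALFA_WATER_AF = 5.3e6    # acre-feet of water used by California alfalfa per year
ALFALFA_REVENUE = 860e6     # dollars of alfalfa hay produced per year
PERSONAL_WATER_AF = 2.8e6   # acre-feet of personal household water use per year
CA_POPULATION = 39e6        # assumed California population

dollars_per_acre_foot = ALFALFA_REVENUE / ALFALFA_WATER_AF
personal_af_per_resident = PERSONAL_WATER_AF / CA_POPULATION
cost_to_offset_all_personal_use = personal_af_per_resident * dollars_per_acre_foot

print(f"Alfalfa revenue per acre-foot of water: ${dollars_per_acre_foot:.0f}")
print(f"Personal water use per resident: {personal_af_per_resident:.3f} acre-feet/year")
print(f"Cost to offset a resident's entire personal use: ${cost_to_offset_all_personal_use:.0f}")
```

Offsetting a resident's entire personal water use comes out to roughly $12 of forgone alfalfa revenue; the $2 figure presumably corresponds to only the fraction of personal use that careful rationing would actually eliminate.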

But in any case the question of whether alfalfa is worth the resource usage has little to do with zeke5123's objection.

Industrial farming of animals requires feeding them, and thanks to thermodynamics this is dramatically less efficient than growing food for humans directly. (Theoretically you can raise some grass-fed cattle on grassland that already exists without clearing new land, but this does not scale and still kills the cattle themselves. Note that labeling beef as "grass-fed" does not mean the cattle get their food exclusively from pasture; it includes feeding them hay, which itself has to be harvested.) You don't need to throw up your hands and act like there's no way to know whether more animal death/suffering is required for beef or bread; various rough estimates like this are enough to show the intuitively obvious answer is correct.

That's a fairer defense, and the Ontario Pension Plan admins should be facing serious scrutiny, if not potential review of their licensing (if they have any), as should any who make serious crypto investment with other people's money and no extremely clear disclosure.

Should they though? I think the standard should be higher for institutional investors than for charities accepting donations, but that doesn't necessarily mean the standard for investors should be significantly higher than it already is. They're an easy target because they're partially sponsored by the government, but they were just doing the same thing that the entirely private investors were doing. And the private investors have an appetite for risk because ones that were too risk-averse would get outcompeted and replaced in their roles by ones that pursued a more successful strategy. Sequoia Capital is a 50-year-old firm managing $85 billion, and while you could speculate that their employees have recently become less competent or too reckless, it seems perfectly plausible that their decision-making here was just the same kind of decision-making that led to these investments:

Notable successful investments by Sequoia Capital include Apple, Cisco, Google, Instagram, LinkedIn, PayPal, Reddit, Tumblr, WhatsApp, and Zoom.

Meanwhile, charities accepting donations both have less to lose, since rather than outright losing an initial investment there's just any money/time wasted by planning around future funding that doesn't come and vague reputational concerns potentially affecting future donations, and more to gain, since you're outright getting money for nothing rather than trying to get a return on money you already have. There's a direct tradeoff between the two: if it's 35% of your funding, you risk having wasted more money if it evaporates, while Sequoia obviously doesn't invest that much in a single company - but if you refuse, you know you're out a whole 35% of your potential funding, whereas Sequoia can just invest their money in something else. If it's 100% of your funding because you've been soliciting funding for your new charity and they're the first donors to say yes, there's certainly a risk the money will dry up and destroy your charity if you can't find a substitute, but if you refuse there's a risk you won't find enough donations to begin with. You talk about it killing charities, but if a sudden loss of funding can do that, how much is because of "less funds than expected" vs. just "less funds, same as if you had refused"?

An EA organization has to... look at other funders? Which, according to Scott's claims here, were already thirsty for good causes to give money to?

The reason there was more funding than EA charities knew what to do with in the short term was that FTX suddenly showed up and threw around a bunch of money; if everyone had refused, that wouldn't have been the case. If those other donors don't materialize for the current funding crunch, would they have done so to begin with?

It seems like the tradeoffs here pretty strongly favor not being particularly picky about who you accept donations from. Sure, if you know someone obtained money from criminality you don't accept the money, but if a dozen institutional investors and the police/SEC don't have a problem then why should you? Now, you could try to mitigate risk in ways other than refusing money outright, like saving more of the money rather than finding ways to spend it immediately, or better yet persuading them to give you a larger endowment rendering you more self-sufficient. But obviously this might not be possible and carries significant disadvantages; for one, donors (especially EA donors) want to see actual results from their donations and evaluate your performance, not "we'll do some charity with this money someday". It transfers the risk of the donor having problems to a risk of the charity having problems, like becoming the Wikimedia Foundation, with an enormous pile of cash and a huge stream of donations coming in while only a tiny fraction gets spent on anything of value. That is after all one of the big problems EA sought to address, and unlike an incompetent/fraudulent for-profit company, which eventually collapses to remove the problem, an incompetent/fraudulent charity can continue to waste people's donations indefinitely. I'm not saying that no improvement is possible - for instance maybe there are measures to be more resilient in case funding is lost - but I don't think it justifies extremely costly measures like outright refusing funding because the donor is in a risky field, and I don't think it reflects some deep problem with EA.

Your link is mangled and goes to a random /r/funny thread, seemingly due to some combination of trying to link an image hosted on Reddit and old.reddit.com.

Interesting. I thought it might correlate with being a lower-trust society and surveys like these, especially because of the stereotype of Russians being vocally cynical, but maybe not. Though I probably shouldn't conclude anything from non-randomized social media polls.

Even the real surveys are dubious (different countries probably radically differ in how they interpret the question, especially when it's being translated) and looking at the link above Russia isn't as low on them as I thought. For instance 23.3% of surveyed Russians agreed with "most people can be trusted", which is lower than the U.S. (39.7%) or Sweden (63.8%) but slightly higher than France (18.7%) or Spain (19%), let alone Brazil (6.5%) or Zimbabwe (2.1%). It's hard to tell how meaningful any of this is.

I addressed this in the footnote.

But it's not true that "for the energy generated, more radiation is given out by fly ash". You didn't say "so long as nothing goes wrong", so the average amount of radiation released per energy produced includes the risk of disaster. And since nuclear power plants involve significantly radioactive material and coal plants don't, even a tiny risk is enough to push the average way above coal plants. The fact that Fukushima alone released more radioactivity than the fly ash we would get from burning all coal deposits on Earth makes this clear.

It is a quite common myth that living near a nuclear power plant emits radiation during ongoing operations.

Then just say "nuclear power plants release virtually no radiation under normal operation". Don't try to make it sound like nuclear beats coal in terms of radiation, on a technicality sufficiently narrow that both you and the Scientific American article you link (and the people I've seen bring up this talking point before) stumble into outright falsehood. Nuclear beats coal on plenty of metrics, there is no need to compare them in terms of radioactivity besides the appeal of being counterintuitive.

Not a reaction of someone who is not even slightly worried.

Sure it is. Yudkowsky is exactly the sort of person who would be outraged at the idea of someone sharing what that person claims is a basilisk, regardless of whether he thinks the specific argument makes any sense. He is also exactly the sort of person who would approach internet moderation with hyper-abstract ideas like "anything which claims to be a basilisk should be censored like one" rather than in terms of PR.

Speaking or writing in a way where it's difficult to use your statements to smear you, even after combing through decades of remarks, is hard. It's why politicians use every question as a jumping-off point to launch into prepared talking points. Part of Yudkowsky's appeal is that he's a very talented writer who doesn't tend to do that; instead you get the weirdness of his actual thought processes. When presented with Roko's dumb argument, his thoughts were about the "correct procedure to handle things claiming to be basilisks", rather than "since the argument claims it should be censored, censoring it could be used to argue I believe it, so I should focus on presenting minimum attack surface against someone trying to smear me that way".

https://archive.is/nM0yJ

Again, I deleted that post not because I had decided that this thing probably presented a real hazard, but because I was afraid some unknown variant of it might, and because it seemed to me like the obvious General Procedure For Handling Things That Might Be Infohazards said you shouldn't post them to the Internet. If you look at the original SF story where the term "basilisk" was coined, it's about a mind-erasing image and the.... trolls, I guess, though the story predates modern trolling, who go around spraypainting the Basilisk on walls, using computer guidance so they don't know themselves what the Basilisk looks like, in hopes the Basilisk will erase some innocent mind, for the lulz. These people are the villains of the story. The good guys, of course, try to erase the Basilisk from the walls. Painting Basilisks on walls is a crap thing to do. Since there was no upside to being exposed to Roko's Basilisk, its probability of being true was irrelevant. And Roko himself had thought this was a thing that might actually work. So I yelled at Roko for violating basic sanity about infohazards for stupid reasons, and then deleted the post. He, by his own lights, had violated the obvious code for the ethical handling of infohazards, conditional on such things existing, and I was indignant about this.

Okay, but most people want to classify the guy who wakes up tomorrow with their memory and personality as being themselves. (Or rather a sufficiently similar memory and personality, since those change over time.) If many-worlds is true and the worlds literally exist, then each instant you're splitting into countless copies, all of whom have your memory/personality/continuity-of-consciousness. Under your interpretation none of them are the same person they were, so nobody is the same person from moment to moment. Which doesn't seem like a terribly useful definition of selfhood.

Your argument is backwards, most of the probability mass with conscious humans will be in those world's where immortality is nice and easy, but I know which world I live in now.

The chance of quantum fluctuations repeatedly keeping you barely alive through random chance is incredibly small, far smaller than the chance of them resulting in a world where someone develops the necessary technology faster than you think is plausible. In his scenario you're lying "with third degree burns buried in a garbage dump"; that means we need absurd quantum events happening continuously for years to prevent you dying of shock, infection, suffocation, starvation, etc. Each unlikely event multiplies the improbability further. Even under the logic of quantum immortality, this only matters if those are the only branches where you survive. Far more probable is that, for instance, quantum fluctuations in some neurons result in someone trying the right ideas to develop an AI that can do superhuman medical research or develop brain-uploading. Indeed, even if that were somehow truly unreachable through normal research, I think it would be more likely that fluctuations in a computer's RAM result in file corruption that happens to correspond to a functioning file containing correct information on the required technology. Because at least that only really has to happen once, rather than happening again and again as in the conventional form of quantum immortality. Eventually the sun is going to expand into a red giant, and similarly worlds where you survive through your society developing space travel are going to dominate worlds where you survive being inside the sun through unlikely quantum events happening many times per second.

Also, more importantly I don't see why if by the Born rule I end up in a world where I am dead, I won't just be dead. There is nothing in physics that says that option is off limits; though, of course, other copies would still exist in agony.

The premise of quantum immortality is that if 1+ copies of you still exist, then you are still alive even if you no longer exist in the vast majority of worlds. If many-worlds is true and corresponds to worlds that are all "real", then there will virtually always be surviving copies. You don't "end up" in any individual world, all the copies diverging from your current self which haven't been destroyed (or altered in ways you consider incompatible with being yourself) are you.

It's not necessary to the argument, but I would argue that under a sensible definition some of the copies that have already diverged are you as well. People don't consider it death when they get drunk and don't retain hours of memories. This isn't too relevant now, but it's potentially relevant to a future self on the verge of death, since under that definition most of your selves that survive are ones that already diverged, rather than more obvious but unlikely quantum immortality scenarios like "in some worlds your brain is preserved in a freak accident and then used to reconstruct your mind centuries later". But ultimately these definitions are an arbitrary decision; human intuitions regarding wanting to live aren't well-equipped to deal with multiple future selves in the first place, whether due to many-worlds or something like multiple software copies. However, under many-worlds you can't just go with the "my current brain is me and copies aren't" option, because all your future selves are copies diverging from your current self.

Also, real life is nowhere near as clean as these hypotheticals, and focusing more on safety has many negative knock-on effects elsewhere.

Sure, that's the cost of using real-life comparisons, but do you really think that's the only thing making some of those tradeoffs worthwhile? That in a situation where it didn't also affect economic growth and immortality research and so on, it would be immoral to accept trades between even minuscule risks of horrific consequences and very small dispersed benefits? We make such tradeoffs constantly and I don't think they need such secondary consequences to justify them. Say someone is writing a novel and thinks of a very slightly better word choice, but editing in the word would require typing 5 more letters, slightly increasing his risk of developing carpal tunnel syndrome, which increases his risk of needing surgery, which increases his risk of the surgeon inflicting accidental nerve damage that causes incredibly bad chronic pain for the rest of his life, equivalent to being continuously tortured. Yes, in real life this would be dominated by other effects like "the author being annoyed at not using the optimal word" or "the author wasting his time thinking about it" - but I don't think those effects are what's needed to make it a reasonable choice. I think it's perfectly reasonable to say that on its own very slightly benefiting your thousands of readers outweighs sufficiently small risks, even if the worst-case scenario for the edit is much worse than the worst-case scenario for not editing. And by extension, if you replicated this scenario enough times with enough sets of authors and readers, then long before you got to 3↑↑↑3 readers enough authors would have made this tradeoff that some of them would really have that scenario happen.

While the number 3↑↑↑3 is obviously completely irrelevant to real-life events in our universe, the underlying point about scale insensitivity and tradeoffs between mild and severe events is not. Yudkowsky just picked a particularly extreme example, perhaps because he thought it would better focus on the underlying idea rather than an example where the specifics are more debatable. But of course "unlikely incident causes people to flip out and implement safety measures that do more damage than they prevent" is a classic of public policy. We will never live in a society of 3↑↑↑3 people, but we do live in a society of billions while having mentalities that react to individual publicized incidents much as if we lived in societies of hundreds. And the thing about thinking "I'd never make tradeoffs like that!" is that they are sufficiently unavoidable in public policy that this just means you'll arbitrarily decide some of them don't count. E.g. if the FDA sincerely decided that "even a single death from regulatory negligence is too much!", probably that would really mean that they would stop approving novel foods and drugs entirely and decide that anyone who died from their lack wasn't their responsibility. (And that mild effects, like people not getting to eat slightly nicer foods, were doubly not their responsibility.)

Many people enjoying a game is (imo) much more significant than many people getting dust specks, while a few people getting skin cancer is much less significant than one person getting tortured for 50 years.

But it isn't nullifying their enjoyment of the game, it's a slight barely-noticeable flicker in the broadcast. (If you want something even smaller, I suppose a single dropped frame would be even smaller than a flicker but still barely noticeable to some people.) If you're making media for millions of people I think it's perfectly reasonable to care about even small barely-noticeable imperfections. And while the primary cost of this is the small amount of effort to notice and fix the problem, this also includes taking minuscule risks of horrific costs. And it isn't a few people getting skin cancer; it's the fraction of the people who get skin cancer who then have something go wrong with surgery such that they suffer torture. I just said torture during the surgery, but of course if you multiply the number of planets enough you would eventually get high odds of at least one planet's broadcast operator suffering something like the aforementioned ultra-severe chronic pain, for a more direct comparison.

Genuinely, even going so far as to write out a company policy for that ridiculous scenario (where 3^^^3 people risk skin cancer) would mean asking all of your employees to familiarize themselves with it, which would mean wasting many lifetimes just to save one lifetime from skin cancer.

Feel free to modify it to "making a design tradeoff that either causes a single dropped frame in the broadcast or a millisecond of more-than-optimal sunlight on the broadcast operator", so that it doesn't consume the operator's time. I just chose something that was easily comparable between a single operator making the choice and making the choice for so many operators that the incredibly unlikely risk actually happens.

Would you choose Maximal Suffering above Maximally Miniscule Suffering?

Sure. Same way that if I had a personal choice between "10^100 out of 3↑↑↑3 odds of suffering the fate you describe" and "100% chance of having a single additional dropped frame in the next video I watch" (and neither the time spent thinking about the question nor uncertainty about the scenario and whether I'm correctly interpreting the math factored into the decision), I would choose to avoid the dropped frame. I'm not even one of the people who finds dropped frames noticeable unless it's very bad, but I figure it has some slight but not-absurdly-unlikely chance of having a noticeable impact on my enjoyment, very much unlike the alternative. Obviously neither number is intuitively understandable to humans but "10^100 out of 3↑↑↑3" is a lot closer to "0" than to "1 out of the highest number I can intuitively understand".
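For anyone not familiar with the notation, the standard definition of Knuth's up-arrows (nothing specific to this thread, just the usual definition) shows why I treat those odds as effectively zero:

$$
\begin{aligned}
3\uparrow 3 &= 3^3 = 27\\
3\uparrow\uparrow 3 &= 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987\\
3\uparrow\uparrow\uparrow 3 &= 3\uparrow\uparrow\bigl(3\uparrow\uparrow 3\bigr) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
\end{aligned}
$$

Dividing a power tower that tall by 10^100 leaves it essentially unchanged, which is why "10^100 out of 3↑↑↑3" is incomparably closer to zero than any odds anyone deals with in practice.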

See my post here. And note the document is from a completely different photo shoot.

The starting point was you saying that people who aren't white nationalists don't "care about white people", and that the reasons for this are sufficiently obvious that even people with drastically different beliefs about the world wouldn't disagree with white nationalism otherwise. You're now talking about how allowing even highly selective non-white immigration could result in intermarriage leading to...the white population ending up with some fraction of a percent of East Asian ancestry? I'm not seeing how this is harmful, and I certainly don't think it is so self-evidently harmful that even people who disagree with you realize it is harmful.

Meanwhile, on a timeframe like that there are far more important factors to focus on. Obviously there are the non-selective forms of immigration, and the large racial minorities that already live in most majority-white countries. There is dysgenic evolutionary pressure costing around 1 IQ point per generation (along with lower conscientiousness, more ADHD, etc.), because modern society is currently set up so that the more successful you are the fewer children you have. And there is the rapidly-arriving promise of technologies like embryo selection or hypothetical future genetic engineering (or simply getting over the eugenics taboo and doing large-scale sperm donation), potentially allowing whichever group is willing to do it to tremendously improve themselves.

How many music videos actually have written or scripted reviews? Reaction videos have a lower barrier to entry than a blog post or scripted video, since you just have to watch and say what you're thinking, but a higher barrier and probably more detail than a YouTube comment. So if someone wants to hear what someone else thinks of a particular music video, reaction videos might be pretty much the only choice available. Also, if a YouTuber or streamer already has an audience, that audience might be interested in what he has to say about something, even if they aren't very interested in the actual subject matter and it's low-effort content.

No, I meant to reply to cake's OP comment.

The proponents were saying 'let's get rid of Saddam it'll be easy and stabilize the Middle East, spread democracy, make new allies...'.

Helping Iraqis and the Middle East doesn't significantly materially strengthen the U.S.; it's expending U.S. resources and power for the sake of charity. This is inherently self-limiting: the U.S. has resources to waste on things like this, but in the end it is left with less capability to wage war than it started with. Having Iraq as an ally or vassal was never going to be valuable enough to be worth a war, even if it was as easy as proponents thought it would be, and proponents of the war instead justified it in terms of humanitarian (Saddam, democracy) or threat-reduction (WMDs) concerns. And the U.S. didn't even really turn Iraq into a vassal; it's a democracy that has at times been vocally critical of the U.S., and there is no guarantee that U.S./Iraq relations won't worsen further in the future. It would have been far easier to turn it into an ally in some other way, like buddying up to Saddam or replacing him with some other dictator. Proponents of the Iraq war didn't say they would turn Iraq into a vassal, they said they would turn it into a democracy, and that is indeed what they did. It was the opponents of the Iraq war who said the U.S. would materially benefit, the "No blood for oil" people, but that was never remotely realistic and the proponents didn't say it was.

Off-world colonies don't need AI, and self-sustaining ones take an absolute sledgehammer to every X-risk except AI and dystopia (and aliens and God, but they're hardly immediate concerns). Dumb incentives for bio research can be fixed (and physics research, if and when we get to that). Dysgenics yields to PGT-P and sperm donors (although eugenics has some issues of its own).

Sure, but of course such measures being possible doesn't mean they'll actually be done.

Hell, even GOFAI research or uploads aren't likely to take much over a century, and would be a hell of a lot safer than playing with neural nets

This seems like too much certainty about the nature and difficulty of the task, which in turn influences whether significant delay actually increases the odds of success. For instance, if we turn out to live in a universe where superhuman AI safety isn't that hard, then the important thing is probably that it be done by a team that considers it a serious concern at all. Right now the leading AI company is run by people who are very concerned with AI alignment and who founded the company with that in mind; if we ban AI development and then the ban gets abandoned in 30 years, there's a good chance that won't be the case again.

A candidate for such a universe would be if it's viable to make superintelligent Tool AIs. Like if GPT-10 can mechanistically output superhuman scientific papers but still doesn't have goals of its own. Such an AI would still be dangerous and you certainly couldn't release it to the general public, but you could carefully prompt it for papers suggesting more resilient AI alignment solutions. Some have argued Agent AIs would have advantages compared to Tool AIs, like Gwern arguing Tool AIs would be "less intelligent, efficient, and economically valuable". Let's say we live in a future where more advanced versions of GPT get routinely hooked up to other components like AgentGPT to carry out tasks, something which makes them significantly better at complicated tasks. OpenAI just developed GPT-10, which might be capable of superhuman scientific research. They can immediately hook it up to AgentGPT+ and make trillions of dollars while curing cancer, or they can spend 2 years tweaking it until it can perform superhuman scientific research without agentic components. It seems plausible that OpenAI would take the harder but safer route, but our 2050s AI company very well might not bother. Especially if the researchers, having successfully gotten rid of the ban, view AI alignment people the same way anti-nuclear-power environmentalists and anti-GMO activists are viewed by those respective fields.

Regarding talk of 100-year bans on AI while people steadily work on supposedly safer methods, I'm reminded of how 40 years ago overpopulation was a big mainstream concern among intellectuals. These ideas influenced government policy, most famously China's One Child policy. Today the fertility rate is substantially reduced (though mostly not by the anti-overpopulation activists), the population is predictably aging, and...the policy is completely abandoned, even though that was the entirely predictable result of dropping fertility. Nowadays if a country is concerned with fertility either way, it'll want it to increase rather than decrease. Likewise the eugenics movement had ambitions of operating across many generations before being erased by the tides of history. In general, expecting your movement/ideas to retain power that long seems very risky.

people who do not and who never have existed can't be said to have "wants" in any meaningful sense

You should include people who will exist as well, as opposed to people who could potentially exist if you took other actions but will never actually exist. Otherwise something like "burying a deadly poison that you know will leach into the water table in 120 years" would be perfectly moral, since the people it will kill don't exist yet.

This kind of idiotic one-dimensional thinking is why I maintain that utilitarianism is fundementally stupid, evil, and incompatible with human flourishing.

As I mentioned, Preference Utilitarianism and Average Preference Utilitarianism are also forms of utilitarianism. And Total Utilitarianism doesn't imply wireheading either. Wireheading is only an implication of particularly literal and naive forms of hedonic utilitarianism that not even actual historical hedonic utilitarians would endorse; they would presumably either claim it isn't "real" happiness or switch to another form of utilitarianism.

Honestly, I think the main rhetorical advantage of non-utilitarian forms of ethics is that they tend to be so incoherent that it is harder to accuse them of endorsing anything in particular. But people being bad at formalizing morality doesn't mean they actually endorse their misformalization's implications. You just tried to express your own non-utilitarian beliefs and immediately endorsed sufficiently-delayed murders of people who aren't born yet; that doesn't mean you actually support that implication. But having non-formalized morality is no advantage in real life, and it often leads to terrible decisions by people who have never rigorously thought about what they're doing, because you really do have to make choices. In medicine, utilitarianism gave us QALYs while non-consequentialism gave us restrictive IRBs that care more about the slightest "injustice" than about saving thousands of lives; as a human who will require medical care, I know which of those I prefer.

omnicide

The view he is expressing is of course the opposite of this - that humanity surviving until it ultimately colonizes the galaxy is so important that improving humanity's safety against extinction matters more than any non-omnicidal danger. Of course that would still leave a lot of uncertainty about what the safest path is. As I argued, significant delays are not necessarily safer.

My 1e999999999999999 hypothetical future descendants who see utilitarian AIs as abominations to be purged with holy fire in the name of the God-Emperor are just as real as your "10^46 hypothetical people per century after galactic colonization" and thier preferences are just as valid.

To be clear the "preference" framing is mine, since I prefer preference utilitarianism. Bostrom would frame it as something like trying to maximize the amount of things we value, such as "sentient beings living worthwhile lives".

The point isn't whether such an outcome is particularly likely, it's that it's more likely than being kept barely alive by a series of staggeringly unlikely macroscopic quantum events. The idea behind quantum immortality is that, if many-worlds is true and all the worlds in it are truly "real", there will always be some small subset of worlds where you continue existing so long as this is physically possible. And a lot of things are physically possible if you get into extremely unlikely quantum fluctuations. Since you don't experience the worlds where you are already dead, an increasing percentage of your remaining future selves would have experienced whatever unlikely events are required to keep you alive. When I said "your society" that wasn't meant to refer to any current society, it was meant to refer to the idea of surviving as part of a society at all. As opposed to most of your future copies surviving as the only remaining human in your universe, floating in space after the destruction of Earth and staying alive only because in some tiny fraction of the Everett branches splitting off each instant some oxygen/etc. randomly appears and keeps you alive. Any future that doesn't require such a continuous series of coincidences will be a much larger fraction of the branches where you survive, and the most obvious such future is one where people deliberately invent the required technology. So whether quantum immortality is true or not, and whether or not you decide to care about the fate of future selves even if they only exist in a small fraction of branches, the expected outcomes of quantum immortality being true aren't the "kept barely alive by randomness" scenarios.

It is precisely the ability to convert between mild experiences and extreme experiences at some ratio that allows everything to add up to something resembling common-sense morality. If you don't, if the ranking of bad experiences from most mild to most severe has one considered infinitely worse than the one that came before, then your decision-making will be dominated by whichever potential consequences pass that threshold while completely disregarding everything below that threshold, regardless of how unlikely those extreme consequences are. You seem to be taking the fact that the risks in these hypotheticals are not worth actual consideration as a point against these hypotheticals, but of course that is the point the hypotheticals are making.

Moral reasoning is not really meant for such extreme numbers

Nothing in the universe will ever be 3↑↑↑3, but 7 billion people is already far beyond intuitive moral reasoning. We still have to make decisions affecting them whether our moral reasoning is meant for it or not. Which includes reacting differently to something bad happening to one person out of millions of beneficiaries than to one person out of hundreds of beneficiaries.

Has anyone ever experienced such nerve damage as a result of a decision they took? Do we know that it's even theoretically possible? I can't imagine that really any amount of carpal tunnel is actually equivalent to many years of deliberate torture, even if 3↑↑↑3 worlds exist and we choose the person who suffers the worst carpal tunnel out of all of them. So I'd probably say that this risk is literally 0, not just arbitrarily small.

In some percentage of cases the cancer spreads to your brain, you get surgery to remove the tumor, and the brain surgeon messes up in precisely the right way. Both "locked-in syndrome" and chronic pain are things that happen, it's hardly a stretch to think a combination of both that paralyzes you for 50 years while you experience continuous agony is physically possible. And of course even if you were uncertain whether it was physically possible, that's just another thing to multiply the improbability by. It's not that rounding the probability down to 0 doesn't make sense in terms of practical decision-making, it's that "1 in 3↑↑↑3" odds are unimaginably less likely, so you should round them down to 0 too.

If you sum up all of the suffering and give it to a single person, IMO the minimal suffering will add up to a lot less than the maximal suffering.

I do not think this is a meaningful statement. We can decide which scenario is preferable and call that something like "net utility", but we can't literally "add up" multiple people's experiences within a single person. It doesn't have a coherent meaning, so we are free to arbitrarily imagine whatever we want. That said, to the extent that its meaning can be nailed down at all, I think it would favor avoiding the 3↑↑↑3 option. My understanding is that a single pain receptor firing once is not noticeable. If a form of suffering is instead barely noticeable, it is presumably "bigger" than a single pain receptor firing. There are only 37 trillion cells in the human body, so the number of pain receptors is something smaller than that. So the first step in multiplying barely-noticeable suffering by 3↑↑↑3 is that it goes from "worse than a pain receptor firing" to "worse than every pain receptor firing continuously for an extended period". And that doesn't make a dent in 3↑↑↑3, so we multiply further, such as by making it last unimaginably longer than merely 10^100 times the lifespan of the universe.
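Laid out as a rough inequality (treating suffering as additive purely for the sake of the comparison, and using the 37-trillion-cell count above only as a generous upper bound on the number of pain receptors):

$$
3\uparrow\uparrow\uparrow 3 \times (\text{barely noticeable}) \;\ge\; \underbrace{3.7\times 10^{13}\times(\text{one receptor firing})}_{\text{every pain receptor firing at once}} \;\times\; \underbrace{\frac{3\uparrow\uparrow\uparrow 3}{3.7\times 10^{13}}}_{\text{repetitions, still}\ \approx\ 3\uparrow\uparrow\uparrow 3}
$$

The conversion to "every receptor firing at once" consumes a negligible part of the multiplier; essentially all of it is left over to extend the duration.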

That is a pretty arbitrary and meaningless matter of interpretation though. A more meaningful measure would be the Rawlsian veil of ignorance: you're a random member of a population of 3↑↑↑3; is it better for you that 10^100 of them be tortured, or that all of them experience a dropped frame in a video? This is equivalent to what I answered in my previous post, that it would be foolish to sacrifice anything to avoid such odds.

Of course the sentence could have been clearer. It's sloppy conversational English relying on the reader to fill in part of the sentence which accidentally ended up having a more straightforward meaning that the writer did not intend, something akin to a garden-path sentence. If there was no context your interpretation would have been the more intuitive one. But there is context, and it's very unlikely that a Twitter employee would claim the rioters were all false flaggers rather than Trump voters, or argue it that particular way if he did. And I think that not only does my reading of it match what he meant, it matches how the other Twitter employees in the conversation interpreted it, how the reporters posting the conversation interpreted it, and how the people responding to you in this thread are interpreting it. So while it's a bit interesting that your reading of it is also possible based on the text it doesn't seem particularly significant.

There are immigration policies other than "white ethnostate" and "open borders". Mass immigration sufficient for your concern to happen would presumably come from countries that suck to live in, and countries that suck to live in rarely have many high-quality immigrants. Even under the current U.S. immigration system, demographic replacement has little to do with the small numbers of highly-selected immigrants, it's the reproduction rates of the population groups already in the U.S. and the ways for low-quality immigrants to bypass that selective system.

I believe the past few years have demonstrated he was more than half right.

Yes, I remembered that passage because it seemed prophetic. But of course both denying citizenship based on race and his later discussion of the black-white intelligence gap are now outside the mainstream Overton window, something to be cited as proof of generic racism and justification for tearing down statues but not actually engaged with. Including by those who simultaneously find it obvious that Israel can't give Palestinians citizenship. The point is that resorting to the "obvious" lets incongruous views pass by completely unexamined. The intent of anti-Zionists in comparing Israel to other ethno-nationalist projects is that Israel should be opposed, but other outcomes of taking that idea seriously would include becoming more sympathetic to ethno-nationalism in general or thinking more rigorously about what you think separates Israel from the others. It's not that those views can't be reconciled; it's that people should have to at least realize they're doing so. And perhaps become more understanding of the views that they currently view as cartoon villainy, whether those views are "racism" or the belief that there is a moral mandate for Israel to give up on being a Jewish state and give citizenship to the Palestinians in the hope that this will result in living together in peace.

Wouldn’t the rarity of the catastrophic failure matter as well?

Which is why you do enough math to sanity-check the comparison. As I mentioned, Fukushima released more radioactivity than would be released by burning all the coal deposits on Earth. Nuclear power plants involve relevant amounts of radioactivity; coal plants don't. The fact that a release like Fukushima happened even once implies the odds aren't low enough to overcome the massive difference in radioactivity. Nuclear has plenty of advantages, and the risk of catastrophic failure is low enough that those other advantages might easily outweigh it, but being less of a radiation risk than coal is not one of them.