@sodiummuffin's banner

sodiummuffin

2 followers   follows 0 users   joined 2022 September 05 03:26:09 UTC

No bio...

User ID: 420

White nationalism doesn't just mean "pro-white"; it is generally defined by its advocates as including a desire for the existence of white ethnostates. It's like conflating "cares about Jewish people" with "Zionist": many Jews believe Zionism harms Jewish people rather than helping them (and the conflation is even less accurate for white nationalism, since Zionism is currently far more mainstream).

It's not just a matter of prioritization but of beliefs about the world. There are plenty of normal people who genuinely think that racial diversity benefits everyone, including white people. Furthermore, even within the realm of people who both know about HBD and think it potentially justifies government discrimination on the basis of race, most are not white nationalists. For instance, white nationalists have termed Emil Kirkegaard an "IQ nationalist", though in the linked post he ends up concluding that explicit IQ nationalism would amount to much the same thing as skilled-worker laws, and that the important thing is keeping out the far-below-average immigrants, without IQ tests or racial discrimination being necessary. Even if you go to a more populist community like /pol/, there are white nationalists who think each race should get its own ethnostate, but also plenty of people who only have an issue with specific races like black people and don't care about racial separation otherwise. If your definition of "white nationalist" includes people who want to ban black immigration but allow mass migration from Hong Kong, on the basis that they believe such immigration would benefit everyone in the destination country including white people, it's not going to be very recognizable to conventional white nationalists.

Sorry, I was going off half-remembered information about how "grass-fed" labeling is meaningless in some countries. A more relevant point is that grass-fed labeling includes food sources like hay, which still have to be harvested, which brings us back to the inherent thermodynamic inefficiency of feeding another animal so you can later eat its meat.

And clearing jungle for pasture is a net improvement for animal welfare, because jungles are obscene murder temples of pure agony, while well-tended pastures are grass and flowers and a few voles (if you don't care about insects).

I was responding based on his assumptions that areas like cropland are bad for animals, rather than being good because they involve creating areas where fewer animals are born into lives of suffering. Yes, with the right set of moral assumptions you can view every animal born into the wild as a bad thing, which would be a point in favor of anything that involves using lots of land in a way that leads to a low density of animal life. But once you're considering things at that level of indirect effects, you should also consider that using resources and land to raise cattle trades off against using it in other ways. Strip-mines and suburbs don't have a high density of animals either, even tree farms aren't that high, it's difficult to predict the effects on land use if people redirected money from meat to something like housing.

In the sufficiently long term the biggest effect might be on social attitudes: as humans gain more and more power over the environment, a society in which ethical vegetarianism is the norm also seems more likely to care about wild animal suffering and act accordingly. (Like those ideas regarding genetically engineering wild animals to reduce their suffering.) If nothing else, wild animals with brains capable of suffering are already becoming a smaller percentage of Earth's population, so the average welfare of animals (including humans in the average) is increasingly driven by whether humanity continues to scale up the population of animals we raise for slaughter alongside our own population. For instance, look at Earth's distribution of mammal and bird biomass - obviously neither mammals nor biomass are exactly the metrics we care about, but it gives a sense of the trend.

I addressed this in the footnote.

But it's not true that "for the energy generated, more radiation is given out by fly ash". You didn't say "so long as nothing goes wrong", so the average amount of radiation released per energy produced includes the risk of disaster. And since nuclear power plants involve significantly radioactive material and coal plants don't, even a tiny risk is enough to push the average way above coal plants. The fact that Fukushima alone released more radioactivity than the fly ash we would get from burning all coal deposits on Earth makes this clear.

It is a quite common myth that living near a nuclear power plant emits radiation during ongoing operations.

Then just say "nuclear power plants release virtually no radiation under normal operation". Don't try to make it sound like nuclear beats coal in terms of radiation, on a technicality sufficiently narrow that both you and the Scientific American article you link (and the people I've seen bring up this talking point before) stumble into outright falsehood. Nuclear beats coal on plenty of metrics, there is no need to compare them in terms of radioactivity besides the appeal of being counterintuitive.

Not a reaction of someone who is not even slightly worried.

Sure it is. Yudkowsky is exactly the sort of person who would be outraged at the idea of someone sharing what that person claims is a basilisk, regardless of whether he thinks the specific argument makes any sense. He is also exactly the sort of person who would approach internet moderation with hyper-abstract ideas like "anything which claims to be a basilisk should be censored like one" rather than in terms of PR.

Speaking or writing in a way where it's difficult to use your statements to smear you, even after combing through decades of remarks, is hard. It's why politicians use every question as a jumping-off point to launch into prepared talking points. Part of Yudkowsky's appeal is that he's a very talented writer who doesn't tend to do that; instead you get the weirdness of his actual thought processes. When presented with Roko's dumb argument, his thoughts were about "the correct procedure to handle things claiming to be basilisks", rather than "since the argument claims it should be censored, censoring it could be used to argue I believe it, so I should focus on presenting minimum attack surface against someone trying to smear me that way".

https://archive.is/nM0yJ

Again, I deleted that post not because I had decided that this thing probably presented a real hazard, but because I was afraid some unknown variant of it might, and because it seemed to me like the obvious General Procedure For Handling Things That Might Be Infohazards said you shouldn't post them to the Internet. If you look at the original SF story where the term "basilisk" was coined, it's about a mind-erasing image and the.... trolls, I guess, though the story predates modern trolling, who go around spraypainting the Basilisk on walls, using computer guidance so they don't know themselves what the Basilisk looks like, in hopes the Basilisk will erase some innocent mind, for the lulz. These people are the villains of the story. The good guys, of course, try to erase the Basilisk from the walls. Painting Basilisks on walls is a crap thing to do. Since there was no upside to being exposed to Roko's Basilisk, its probability of being true was irrelevant. And Roko himself had thought this was a thing that might actually work. So I yelled at Roko for violating basic sanity about infohazards for stupid reasons, and then deleted the post. He, by his own lights, had violated the obvious code for the ethical handling of infohazards, conditional on such things existing, and I was indignant about this.

Your argument is backwards, most of the probability mass with conscious humans will be in those world's where immortality is nice and easy, but I know which world I live in now.

The chance of quantum fluctuations repeatedly keeping you barely alive through random chance is incredibly unlikely, far more unlikely than them resulting in a world where someone develops the necessary technology faster than you think is plausible. In his scenario you're lying "with third degree burns buried in a garbage dump": that means we need absurd quantum events happening continuously for years to prevent you dying of shock, infection, suffocation, starvation, etc. Each unlikely event multiplies the improbability further. Even under the logic of quantum immortality, these branches only matter if they're the only ones where you survive. Far more probable is that, for instance, quantum fluctuations in some neurons result in someone trying the right ideas to develop an AI that can do superhuman medical research or develop brain-uploading. Indeed, even if that were somehow truly unreachable through normal research, I think it would be more likely that fluctuations in a computer's RAM result in file corruption that happens to correspond to a functioning file containing correct information on the required technology, because at least that only has to happen once, rather than again and again as in the conventional form of quantum immortality. Eventually the sun is going to expand into a red giant, and similarly, worlds where you survive through your society developing space travel are going to dominate worlds where you survive being inside the sun through unlikely quantum events happening many times per second.

Also, more importantly I don't see why if by the Born rule I end up in a world where I am dead, I won't just be dead. There is nothing in physics that says that option is off limits; though, of course, other copies would still exist in agony.

The premise of quantum immortality is that if 1+ copies of you still exist, then you are still alive even if you no longer exist in the vast majority of worlds. If many-worlds is true and corresponds to worlds that are all "real", then there will virtually always be surviving copies. You don't "end up" in any individual world, all the copies diverging from your current self which haven't been destroyed (or altered in ways you consider incompatible with being yourself) are you.

It's not necessary to the argument, but I would argue that under a sensible definition some of the copies that have already diverged are you as well. People don't consider it death when they get drunk and don't retain hours of memories. This isn't too relevant now, but it's potentially relevant to a future self on the verge of death, since under that definition most of your selves that survive are ones that already diverged, rather than more obvious but unlikely quantum immortality scenarios like "in some worlds your brain is preserved in a freak accident and then used to reconstruct your mind centuries later". But ultimately these definitions are an arbitrary decision; human intuitions regarding wanting to live aren't well-equipped to deal with multiple future selves in the first place, whether due to many-worlds or something like multiple software copies. However, under many-worlds you can't just go with the "my current brain is me and copies aren't" option, because all your future selves are copies diverging from your current self.

See my post here. And note the document is from a completely different photo shoot.

The U.S. harvests 51.5 million acres of hay and 37.3 million acres of wheat per year. So setting aside all other forms of animal feed, more land goes to producing hay alone than to wheat.

However, I think that Zeke was referring to small mammals getting killed during harvesting, which my googling suggests is more due to increased predation from loss of cover than getting chewed up by machinery.

Which is why I'm pointing out that raising cattle at scale involves harvesting even more land. Estimating the effects on animals from cropland is difficult, but it's not a comparison that favors beef to begin with.

Industrial farming of animals requires feeding them, and thanks to thermodynamics this is dramatically less efficient than growing food for humans directly. (Theoretically you can raise some grass-fed cattle on grassland that already exists without clearing new land but this does not scale and still kills the cattle themselves. Note that labeling beef as "grass-fed" does not mean they get their food exclusively from pasture, it includes feeding them hay which itself has to be harvested.) You don't need to throw up your hands and act like there's no way to know if there's more animal death/suffering required for beef or bread, various rough estimates like this are enough to show the intuitively obvious answer is correct.

had enough reach that those victims received threats from other people

But the threats don't have to have anything to do with the reach. Everything Alex Jones said about Sandy Hook came long after it was mainstream in conspiracy theory circles. The idea that those involved were actors didn't catch on because of Jones. It caught on because of that one Robbie Parker video from the day after the shooting where he comes across as suspicious, which has been incessantly posted on /pol/ since it came out and supplemented with various additional coincidences. 4plebs doesn't go all the way back to the shooting but you can see the discussion back in December 2013:

http://archive.4plebs.org/pol/search/text/robbie%20parker/order/asc/page/1/

Meanwhile Alex Jones was claiming that the shooting happened but was a false-flag orchestrated by the government. If we go by this Media Matters page he didn't start parroting the "actors" thing until 2014. It's hard to judge since I obviously don't watch him myself, but it doesn't even look like he talked about it much; the grassroots conspiracy-theorist interest was much greater. But nonetheless the legal system blames him for other conspiracy theorists because they share the same beliefs, without proving that he caused those beliefs.

But zeke5123 is talking about accidentally killing animals as part of growing and harvesting crops, not optimal land use. That seems like it would be similar per-acre whether you're growing alfalfa or wheat.

It's a completely different subject but I'm reminded of Scott's 2015 post about California's water crisis:

https://slatestarcodex.com/2015/05/11/california-water-you-doing/

34 million acre-feet of water are diverted to agriculture. The most water-expensive crop is alfalfa, which requires 5.3 million acre-feet a year. If you’re asking “Who the heck eats 5.3 million acre-feet of alfalfa?” the answer is “cows”. A bunch of other crops use about 2 million acre-feet each.

All urban water consumption totals 9 million acre-feet. Of those, 2.4 million are for commercial and industrial institutions, 3.8 million are for lawns, and 2.8 million are personal water use by average citizens in their houses.

Which leads to interesting calculations like this:

The California alfalfa industry makes a total of $860 million worth of alfalfa hay per year. So if you calculate it out, a California resident who wants to spend her fair share of money to solve the water crisis without worrying about cutting back could do it by paying the alfalfa industry $2 to not grow $2 worth of alfalfa, thus saving as much water as if she very carefully rationed her own use.
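As a rough check on that quoted calculation, here's a back-of-the-envelope sketch: the acre-feet and dollar figures come from the quote, while the California population and the fraction of personal use that "careful rationing" might save are my own placeholder assumptions.

```python
# Rough reconstruction of the quoted alfalfa calculation. Acre-feet and dollar
# figures come from the quote; the population and rationing-savings fraction
# are placeholder assumptions for illustration only.

GALLONS_PER_ACRE_FOOT = 325_851

alfalfa_acre_feet = 5.3e6          # water used by California alfalfa per year
alfalfa_revenue_usd = 860e6        # value of California alfalfa hay per year
personal_acre_feet = 2.8e6         # personal in-home use by residents
ca_population = 39e6               # assumption: ~39 million Californians
rationing_fraction = 0.2           # assumption: careful rationing saves ~20%

water_per_dollar = alfalfa_acre_feet / alfalfa_revenue_usd   # acre-feet per $1
saved_by_paying_2_dollars = 2 * water_per_dollar * GALLONS_PER_ACRE_FOOT
saved_by_rationing = (personal_acre_feet / ca_population
                      * rationing_fraction * GALLONS_PER_ACRE_FOOT)

print(f"Paying $2 to idle alfalfa:  ~{saved_by_paying_2_dollars:,.0f} gallons")  # ~4,000
print(f"Careful personal rationing: ~{saved_by_rationing:,.0f} gallons")         # ~4,700
```

Under those assumptions the two numbers come out in the same ballpark, which is all the quoted claim needs.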

But in any case the question of whether alfalfa is worth the resource usage has little to do with zeke5123's objection.

None of that addresses that raising meat for slaughter involves growing more crops, not less. For instance, the U.S. harvests 51.5 million acres of hay and 37.3 million acres of wheat per year. Even before trying to account for other sources of animal feed, or that people eat more wheat than beef, or that some of that wheat is itself feeding animals, hay alone uses more land than wheat production.

These weren't no-names or non-scientists but they were seriously and embarrassingly wrong. Imagine if we actually listened to these people, speedily cut fossil fuels out of the world economy accepting the energy rationing, economic mobilization and famines that would likely happen... only for it to be a nothingburger.

No they weren't, The Guardian just made that up. It's not a prediction, it's a brief outline of a hypothetical written by two non-scientists (both self-professed futurists working for the consulting firm Global Business Network) who specifically state that it is extreme and unlikely. The point is not that they think it is likely to happen, but that they think such unlikely but extreme scenarios should be considered and prepared for by the Pentagon.

An Abrupt Climate Change Scenario and Its Implications for United States National Security

We have created a climate change scenario that although not the most likely, is plausible, and would challenge United States national security in ways that should be considered immediately.

These are the steps they propose be taken:

  • Improve predictive climate models to allow investigation of a wider range of scenarios and to anticipate how and where changes could occur
  • Assemble comprehensive predictive models of the potential impacts of abrupt climate change to improve projections of how climate could influence food, water, and energy
  • Create vulnerability metrics to anticipate which countries are most vulnerable to climate change and therefore, could contribute materially to an increasingly disorderly and potentially violent world.
  • Identify no-regrets strategies such as enhancing capabilities for water management
  • Rehearse adaptive responses
  • Explore local implications
  • Explore geo-engineering options that control the climate.

Notice that reducing CO2 emissions isn't even mentioned, because their scenario is so abrupt that it would be too late; rather, they are talking about preparing ways to mitigate the damage and/or do emergency geo-engineering, in case an unlikely scenario like that happens.

This report suggests that, because of the potentially dire consequences, the risk of abrupt climate change, although uncertain and quite possibly small, should be elevated beyond a scientific debate to a U.S. national security concern.

  1. Yes, in comparison to established democracies they seem less stable and unlikely to survive as long.

  2. Like most authoritarian governments, they pay the cost to the functioning of the country I mentioned, because they are less responsive to feedback and have to keep things under control in other ways. What democratic countries would actually prefer to live under a government like Saudi Arabia in exchange for some supposed economic benefit from open borders?

  3. Remember we are talking not just about formal democracy but a "share in governance", in particular in the context of open borders. Non-democracies can still do things to keep the support of the majority of residents, both by controlling who enters (and how long they stay) and by being responsive to the desires of residents. But he was talking about a country that both let in anyone and then disregarded their opinions in favor of democratic rule by the minority of natives.

initially lie to her in at least one way about it

I don't see how that is shown by the email in question.

having at least one other affair at approximately the same time

The email talks about feelings rather than actions, so this may depend on whether we're including "emotional cheating". I'm not saying that multiple extramarital sexual relationships are an implausible interpretation, but it's not completely definitive. More to the point:

having an affair

Note that, while in the email he says "affair", whether he was actually having an affair may depend on the definition you are using. She claims that he falsely claimed his wife was fine with it. If that arrangement was instead actually real, having extramarital sex with his wife's permission would not fit the definition of affair typically used by "polyamorous" people, even if Singer himself used the word. I am not very inclined to think polyamory is a good idea, not least because it leads to more relationship drama like this, but I do think it makes a difference ethically if he had permission. And it doesn't seem terribly implausible for a philosopher and his wife to be the sort of people to think open relationships are a good idea in 2002.

There are immigration policies other than "white ethnostate" and "open borders". Mass immigration sufficient for your concern to happen would presumably come from countries that suck to live in, and countries that suck to live in rarely have many high-quality immigrants. Even under the current U.S. immigration system, demographic replacement has little to do with the small numbers of highly-selected immigrants, it's the reproduction rates of the population groups already in the U.S. and the ways for low-quality immigrants to bypass that selective system.

Wouldn’t the rarity of the catastrophic failure matter as well?

Which is why you do enough math to sanity-check the comparison. As I mentioned, Fukushima released more radioactivity than would be released by burning all the coal deposits on Earth. Nuclear power plants involve relevant amounts of radioactivity, coal plants don't. The fact that a release like Fukushima happened even once implies the odds aren't low enough to overcome the massive difference in radioactivity. Nuclear has plenty of advantages, and the risk of catastrophic failure is low enough that those other advantages might easily outweigh it, but being less of a radiation risk than coal is not one of them.

The proponents were saying 'let's get rid of Saddam it'll be easy and stabilize the Middle East, spread democracy, make new allies...'.

Helping Iraqis and the Middle East doesn't significantly materially strengthen the U.S.; it's expending U.S. resources and power for the sake of charity. This is inherently self-limiting: the U.S. has resources to waste on things like this, but in the end it is left with less capability to wage war than it started with. Having Iraq as an ally or vassal was never going to be valuable enough to be worth a war, even if it was as easy as proponents thought it would be, and proponents of the war instead justified it in terms of humanitarian (Saddam, democracy) or threat-reduction (WMDs) concerns. And the U.S. didn't even really turn Iraq into a vassal; it's a democracy that has at times been vocally critical of the U.S., and there is no guarantee that U.S./Iraq relations won't worsen further in the future. It would have been far easier to turn it into an ally in some other way, like buddying up to Saddam or replacing him with some other dictator. Proponents of the Iraq war didn't say they would turn Iraq into a vassal, they said they would turn it into a democracy, and that is indeed what they did. It was the opponents of the Iraq war who said the U.S. would materially benefit, the "No blood for oil" people, but that was never remotely realistic and the proponents didn't say it was.

That's a fairer defense, and the Ontario Pension Plan admins should be facing serious scrutiny, if not potential review of their licensing (if they have any), as should any who make serious crypto investment with other people's money and no extremely clear disclosure.

Should they though? I think the standard should be higher for institutional investors than for charities accepting donations, but that doesn't necessarily mean the standard for investors should be significantly higher than it already is. They're an easy target because they're partially sponsored by the government, but they were just doing the same thing that the entirely private investors were doing. And the private investors have an appetite for risk because ones that were too risk-averse would get outcompeted and replaced in their roles by ones that pursued a more successful strategy. Sequoia Capital is a 50-year-old firm managing $85 billion, and while you could speculate that their employees have recently become less competent or too reckless, it seems perfectly plausible that their decision-making here was just the same kind of decision-making that led to these investments:

Notable successful investments by Sequoia Capital include Apple, Cisco, Google, Instagram, LinkedIn, PayPal, Reddit, Tumblr, WhatsApp, and Zoom.

Meanwhile, charities accepting donations have both less to lose and more to gain. Less to lose, because rather than outright losing an initial investment there's just whatever money/time was wasted planning around future funding that doesn't come, plus vague reputational concerns potentially affecting future donations. More to gain, because you're outright getting money for nothing rather than trying to get a return on money you already have. There's a direct tradeoff between the two: if it's 35% of your funding you risk having wasted more money if it evaporates, while Sequoia obviously doesn't invest that much in a single company - but if you refuse you know you're out a whole 35% of your potential funding, whereas Sequoia can just invest their money in something else. If it's 100% of your funding because you've been soliciting funding for your new charity and they're the first donors to say yes, there's certainly a risk the money will dry up and destroy your charity if you can't find a substitute, but if you refuse there's a risk you won't find enough donations to begin with. You talk about it killing charities, but if a sudden loss of funding can do that, how much of the harm is from "less funds than expected" vs. just "less funds, same as if you had refused"?

An EA organization has to... look at other funders? Which, according to Scott's claims here, were already thirsty for good causes to give money to?

The reason there was more funding than EA charities knew what to do with in the short term was that FTX suddenly showed up and threw around a bunch of money; if everyone had refused it, that wouldn't have been the case. If those other donors don't materialize for the current funding crunch, would they have done so to begin with?

It seems like the tradeoffs here pretty strongly favor not being particularly picky about who you accept donations from. Sure, if you know someone obtained money from criminality you don't accept the money, but if a dozen institutional investors and the police/SEC don't have a problem, then why should you? Now, you could try to mitigate risk in ways other than refusing money outright, like saving more of the money rather than finding ways to spend it immediately, or better yet persuading them to give you a larger endowment rendering you more self-sufficient. But obviously this might not be possible and carries significant disadvantages; for one, donors (especially EA donors) want to see actual results from their donations and evaluate your performance, not "we'll do some charity with this money someday". It transfers the risk of the donor having problems to a risk of the charity having problems, like becoming the Wikimedia Foundation, with an enormous pile of cash and a huge stream of donations coming in while only a tiny fraction gets spent on anything of value. That is after all one of the big problems EA sought to address, and unlike an incompetent/fraudulent for-profit company which eventually collapses to remove the problem, an incompetent/fraudulent charity can continue to waste people's donations indefinitely. I'm not saying that no improvement is possible, for instance maybe there are measures to be more resilient in case funding is lost, but I don't think it justifies extremely costly measures like outright refusing funding because the donor is in a risky field, and I don't think it reflects some deep problem with EA.

Your link is mangled and goes to a random /r/funny thread, seemingly due to some combination of trying to link an image hosted on Reddit and old.reddit.com.

Interesting. I thought it might correlate with being a lower-trust society and surveys like these, especially because of the stereotype of Russians being vocally cynical, but maybe not. Though I probably shouldn't conclude anything from non-randomized social media polls.

Even the real surveys are dubious (different countries probably radically differ in how they interpret the question, especially when it's being translated) and looking at the link above Russia isn't as low on them as I thought. For instance 23.3% of surveyed Russians agreed with "most people can be trusted", which is lower than the U.S. (39.7%) or Sweden (63.8%) but slightly higher than France (18.7%) or Spain (19%), let alone Brazil (6.5%) or Zimbabwe (2.1%). It's hard to tell how meaningful any of this is.

Okay, but most people want to classify the guy who wakes up tomorrow with their memory and personality as being themselves. (Or rather a sufficiently similar memory and personality, since those change over time.) If many-worlds is true and the worlds literally exist, then each instant you're splitting into countless copies, all of whom have your memory/personality/continuity-of-consciousness. Under your interpretation none of them are the same person they were, so nobody is the same person from moment to moment. Which doesn't seem like a terribly useful definition of selfhood.

Also, real life is nowhere near as clean as these hypotheticals, and focusing more on safety has many negative knock-on effects elsewhere.

Sure, that's the cost of using real-life comparisons, but do you really think that's the only thing making some of those tradeoffs worthwhile? That in a situation where it didn't also affect economic growth and immortality research and so on, it would be immoral to accept trades between even minuscule risks of horrific consequences and very small dispersed benefits? We make such tradeoffs constantly, and I don't think they need such secondary consequences to justify them. Say someone is writing a novel and thinks of a very slightly better word choice, but editing in the word would require typing 5 more letters, slightly increasing his risk of developing carpal tunnel syndrome, which increases his risk of needing surgery, which increases his risk of the surgeon inflicting accidental nerve damage that causes incredibly bad chronic pain for the rest of his life, equivalent to being continuously tortured. Yes, in real life this would be dominated by other effects like "the author being annoyed at not using the optimal word" or "the author wasting his time thinking about it" - but I don't think that's what is necessary to make it a reasonable choice. I think it's perfectly reasonable to say that on its own very slightly benefiting your thousands of readers outweighs sufficiently small risks, even if the worst-case scenario for the edit is much worse than the worst-case scenario for not editing. And by extension, if you replicated this scenario enough times with enough sets of authors and readers, then long before you got to 3↑↑↑3 readers enough authors would have made this tradeoff that some of them would really have that scenario happen.
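To make the "tiny probabilities times huge harms" reasoning concrete, here's a toy expected-value sketch; every probability and utility in it is invented purely for illustration, not taken from any real data.

```python
# Toy expected-value sketch of the novelist example. Every number below is
# an invented placeholder, chosen only to illustrate the shape of the tradeoff.

p_carpal_tunnel_from_5_keystrokes = 1e-12   # assumption
p_surgery_given_carpal_tunnel = 1e-2        # assumption
p_nerve_damage_given_surgery = 1e-4         # assumption
harm_of_lifelong_severe_pain = -1e9         # utility units, assumption

readers = 10_000                            # assumption
benefit_per_reader_of_better_word = 1e-3    # utility units, assumption

expected_harm = (p_carpal_tunnel_from_5_keystrokes
                 * p_surgery_given_carpal_tunnel
                 * p_nerve_damage_given_surgery
                 * harm_of_lifelong_severe_pain)
expected_benefit = readers * benefit_per_reader_of_better_word

print(f"Expected harm of editing:    {expected_harm:.3e}")    # ~ -1e-09
print(f"Expected benefit of editing: {expected_benefit:.3e}")  # ~ +1e+01
# The worst case of editing is far worse than the worst case of not editing,
# but the expected values still overwhelmingly favor making the edit.
```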

While the number 3↑↑↑3 is obviously completely irrelevant to real-life events in our universe, the underlying point about scale insensitivity and tradeoffs between mild and severe events is not. Yudkowsky just picked a particularly extreme example, perhaps because he thought it would better focus on the underlying idea rather than an example where the specifics are more debatable. But of course "unlikely incident causes people to flip out and implement safety measures that do more damage than they solve" is a classic of public policy. We will never live in a society of 3↑↑↑3 people, but we do live in a society of billions while having mentalities that react to individual publicized incidents much like if we lived in societies of hundreds. And the thing about thinking "I'd never make tradeoffs like that!" is that they are sufficiently unavoidable in public policy that this just means you'll arbitrarily decide some of them don't count. E.g. if the FDA sincerely decided that "even a single death from regulatory negligence is too much!", probably that would really mean that they would stop approving novel foods and drugs entirely and decide that anyone who died from their lack wasn't their responsibility. (And that mild effects, like people not getting to eat slightly nicer foods, were doubly not their responsibility.)

Many people enjoying a game is (imo) much more significant than many people getting dust specks, while a few people getting skin cancer is much less significant than one person getting tortured for 50 years.

But it isn't nullifying their enjoyment of the game, it's a slight barely-noticeable flicker in the broadcast. (If you want something even smaller, I suppose a single dropped frame would be even smaller than a flicker but still barely noticeable to some people.) If you're making media for millions of people I think it's perfectly reasonable to care about even small barely-noticeable imperfections. And while the primary cost of this is the small amount of effort to notice and fix the problem, this also includes taking minuscule risks of horrific costs. And it isn't a few people getting skin cancer, it's the fraction of the people who get skin cancer that then have something go wrong with surgery such that they suffer torture. I just said torture during the surgery, but of course if you multiply the number of planets enough you would eventually get high odds of at least one planet's broadcast operator suffering something like the aforementioned ultra-severe chronic pain for a more direct comparison.

Genuinely, even going so far as to write out a company policy for that ridiculous scenario (where 3^^^3 people risk skin cancer) would mean asking all of your employees to familiarize themselves with it, which would mean wasting many lifetimes just to save one lifetime from skin cancer.

Feel free to modify it to "making a design tradeoff that either causes a single dropped frame in the broadcast or a millisecond of more-than-optimal sunlight on the broadcast operator", so that it doesn't consume the operator's time. I just chose something that was easily comparable between a single operator making the choice and making the choice for so many operators that the incredibly unlikely risk actually happens.

Would you choose Maximal Suffering above Maximally Miniscule Suffering?

Sure. Same way that if I had a personal choice between "10^100 out of 3↑↑↑3 odds of suffering the fate you describe" and "100% chance of having a single additional dropped frame in the next video I watch" (and neither the time spent thinking about the question nor uncertainty about the scenario and whether I'm correctly interpreting the math factored into the decision), I would choose to avoid the dropped frame. I'm not even one of the people who finds dropped frames noticeable unless it's very bad, but I figure it has some slight but not-absurdly-unlikely chance of having a noticeable impact on my enjoyment, very much unlike the alternative. Obviously neither number is intuitively understandable to humans but "10^100 out of 3↑↑↑3" is a lot closer to "0" than to "1 out of the highest number I can intuitively understand".
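For readers who haven't seen Knuth's up-arrow notation, a quick sketch may help convey why "10^100 out of 3↑↑↑3" is effectively indistinguishable from zero. The recursive definition below is the standard one; it can only actually be evaluated for tiny arguments.

```python
# Knuth up-arrow notation: a ↑ b is exponentiation, and each extra arrow
# iterates the previous operation. 3↑↑↑3 = 3↑↑(3↑↑3) = 3↑↑7,625,597,484,987,
# i.e. a power tower of 3s about 7.6 trillion levels high - so odds of
# "10^100 out of 3↑↑↑3" are unimaginably closer to zero than to any
# everyday small probability.

def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a ↑^n b (n arrows). Only feasible for very small inputs."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))   # 3^3 = 27
print(up_arrow(3, 2, 3))   # 3↑↑3 = 3^(3^3) = 7,625,597,484,987
# up_arrow(3, 3, 3) would be 3↑↑7,625,597,484,987 - hopelessly out of reach.
```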

This is just "The Pyramid and the Garden". People aren't good at properly adjusting for the level of cherrypicking and degrees of freedom possible when you have thousands of people scouring a large world for evidence matching their pet theory.

A photoshoot for a fashion company reuses a "legal documents" prop from the shooting of a television drama as "office documents", the same company sells fashion that is vaguely leather-daddy inspired and didn't segregate it from photoshoots with children, and you conclude that "we are ruled by satanic pedophiles". (And they are deliberately embedding evidence about this in fashion photoshoots for some reason.) If you lived in a tribe of a few dozen people and happened to personally notice two coincidences like that about a single person, maybe that would be reason to be suspicious. But you don't, you live in a society of hundreds of millions where thousands of people spend time hunting down and broadcasting stuff like this for your perusal. As a result this doesn't even really tell us about Balenciaga's marketing department, let alone "society". But people's brains don't adjust like that, so give them a few coincidences like this and they'll either come to believe false things or dismiss it out of hand as a conspiracy theory. And then the ones who do the latter are still vulnerable to the same mistakes in reasoning when packaged in ways that don't register as "conspiracy theory", especially ones spread by mainstream media sources.

Missing a not, anyway this is just a made up principle. Annexation is beyond the pale but we can bomb various countries into anarchy, set up puppet governments in them, divide countries into smaller parts, place troops in countries without their permission?

Yes. Wars of annexation materially strengthen aggressors and incentivize further war; they are a sort of geopolitical positive feedback loop. In the modern era, going to war makes you weaker and poorer, less capable of waging war rather than more. Sometimes countries are willing to do it anyway, and of course there is gaming of the boundaries, but keeping the feedback loop negative rather than positive helps stop this getting too out of hand. How harmful (or beneficial) the war is to the country being invaded isn't really relevant to that; the important thing is that it be harmful to the aggressor. For instance, the invasion of Iraq imposed a cost rather than a benefit on the U.S. (as even most of its proponents knew it would), so it didn't result in a series of further U.S. invasions, but the Russian invasion of Crimea was sufficiently beneficial that it chain-reacted into the invasion of the rest of Ukraine.

Wars must have no winners, only losers, and to ensure this continues to remain the case, countries are willing to take losses themselves so that attempted wars of annexation leave the aggressor indisputably worse off. Complaining that countries are "irrationally" willing to harm themselves for the sake of deterrence is deeply silly; it's basic game theory and central to the logic of modern war. If Russia thought countries wouldn't really be willing to harm themselves for no benefit besides vague principles of game-theoretic value, that's just another way that Russia's status as a low-trust society has made it weaker.
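To illustrate the game-theoretic point, here's a minimal toy deterrence game with invented payoffs: if defenders credibly commit to costly punishment, annexation stops being profitable and the aggressor's best response flips from invading to staying put.

```python
# Minimal deterrence sketch with invented payoffs, as (aggressor, defenders).
# The point: defenders accept a cost to themselves so that annexation leaves
# the aggressor worse off, removing the incentive to invade in the first place.

payoffs = {
    ("invade", "punish"):    (-5, -2),   # annexation met with costly sanctions/aid
    ("invade", "acquiesce"): (+3, -1),   # annexation pays off if tolerated
    ("stay",   "n/a"):       ( 0,  0),   # status quo
}

def aggressor_best_move(defenders_policy: str) -> str:
    invade_payoff = payoffs[("invade", defenders_policy)][0]
    stay_payoff = payoffs[("stay", "n/a")][0]
    return "invade" if invade_payoff > stay_payoff else "stay"

print(aggressor_best_move("acquiesce"))  # "invade" - annexation is profitable
print(aggressor_best_move("punish"))     # "stay"   - deterrence works
```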

Scientific American: Coal Ash Is More Radioactive Than Nuclear Waste

the study i linked found that for the energy generated, more radiation is given out by fly ash, which contains trace amounts of uranium and thorium. while the amount of radiation that makes it into people from both of these sources isn't dangerous, it's worth pointing out when given the concerns of "gonna be irradiated."

The title of that article is laughably false. The underlying point it is based on, that under normal operation a nuclear plant releases less radioactive material into the environment than a coal plant, is technically true but grossly misleading. Under normal operation nuclear plants release essentially no radioactive material; the radioactivity concern is concentrated purely in the possibility of something going wrong. Here's a sanity check I did after encountering this argument a decade ago:

The EPA gives the radioactivity of average fly ash as 5.8 picocuries per gram, and the U.S. produces around 125 million tons of coal-combustion byproducts per year as of 2006. If we overestimate and assume all coal-combustion byproducts are the more-radioactive fly ash, that comes to around 658 curies worth of material per year. By comparison, a year after the Fukushima disaster TEPCO estimated total radiation releases as 538,100 terabecquerels - equivalent to 14,543,243 curies. Note that this assumes all fly ash is being released into the environment, even though modern first-world plants safely capture most of it. So one year after the Fukushima disaster it had already released more radiation than 22,000 years of 2006-era U.S. coal radiation emissions, under very pessimistic assumptions. Which means we can confidently estimate Fukushima has released far more radiation than all the coal burned in human history and all the coal remaining in the ground that could be burned combined.
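For anyone who wants to redo that sanity check, here's a minimal sketch of the arithmetic using only the figures cited above (5.8 pCi/g, 125 million short tons, 538,100 TBq) plus standard unit conversions; the rounding is mine.

```python
# Back-of-the-envelope check: annual U.S. coal fly-ash radioactivity vs. the
# Fukushima release, using the figures cited above.

PCI_PER_CURIE = 1e12                 # 1 curie = 1e12 picocuries
BQ_PER_CURIE = 3.7e10                # 1 curie = 3.7e10 becquerels
GRAMS_PER_SHORT_TON = 907_184.74

fly_ash_pci_per_gram = 5.8           # EPA average for fly ash
byproducts_tons_per_year = 125e6     # 2006-era U.S. coal-combustion byproducts
fukushima_tbq = 538_100              # TEPCO estimate one year after the disaster

# Pessimistically assume all byproducts are fly ash and all of it escapes.
coal_curies_per_year = (byproducts_tons_per_year * GRAMS_PER_SHORT_TON
                        * fly_ash_pci_per_gram) / PCI_PER_CURIE
fukushima_curies = fukushima_tbq * 1e12 / BQ_PER_CURIE

print(f"Coal fly ash: ~{coal_curies_per_year:,.0f} curies/year")   # ~658
print(f"Fukushima:    ~{fukushima_curies:,.0f} curies")            # ~14,543,243
print(f"Ratio:        ~{fukushima_curies / coal_curies_per_year:,.0f} years of coal emissions")  # ~22,000
```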

This doesn't mean that nuclear power is overall a bad idea, but it's definitely not because coal is a worse radioactivity concern. From what I've heard, this particular misleading talking point was making the rounds even before it started circulating on the internet; I remember someone telling me it was going around Stanford decades ago. People should be cautious with counterintuitive factoids like this, since they often spread precisely because they are too good to check.