sodiummuffin
Red requires 100% cooperation for the optimal outcome, blue requires 50% cooperation for the optimal outcome. It is near-impossible to get 100% cooperation for anything, particularly something where defecting is as simple as pressing a different button and has an actual argument for doing so. Meanwhile getting 50% cooperation is pretty easy. If blue required 90% or something it would probably make more sense to cut our losses and aim for minimizing the number of blue, but at 50% it's easy enough to make it worthwhile to aim for 0 deaths via blue majority.

If we are to compare to politics, I think the obvious comparison is to utopian projects like complete pacifism that only work if you either have 100% cooperation (in which case there is no violence to defend against or deter) or if you have so little cooperation that everyone else successfully coordinates to keep the violence-using status-quo (akin to voting for red but blue getting the majority). Except that such projects at least have the theoretical advantage of being better if they got 100% cooperation, whereas 100% cooperation on red is exactly the same as 50%-100% cooperation on blue.

In real life serious crime is almost always a self-destructive act, and yet people do it anyway. "Just create a society where there's no incentive to do crime and we can abolish the police because 0 people will be criminals" doesn't work, not just because you can't create such a society, but because some people would be criminals even if there was no possible net benefit. We can manage high cooperation, which is why we can coordinate to do things like have a justice system, but we can't manage 100% cooperation; that's why we need a justice system instead of everyone just choosing to not be criminals.

It might help to separate out the coordination problem from the self-preservation and "what blue voters deserve" aspects. Let us imagine an alternative version where, if blue gets below 50% of the vote, 1 random person dies for each blue vote. Majority blue is once again the obvious target to aim for so that nobody dies, though ironically it might be somewhat harder to coordinate around since it seems less obviously altruistic. Does your answer here differ from the original question? The thing is, even if you think this version favors blue more because the victims are less deserving of death, so long as you place above-zero value on the lives of blue voters in the first question the most achievable way to get the optimal outcome is still 50% blue.
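To make the threshold structure concrete, here is a toy model (my own sketch, not from the original question) of the death count in both versions:

```python
def deaths(n_blue, n_total):
    """Deaths under either version described above: if blue falls short
    of 50%, the count equals the number of blue votes (in the original
    version the blue voters themselves die; in the alternative version,
    the same number of random people die). At 0% or >= 50% blue, nobody dies."""
    if n_blue == 0 or n_blue >= n_total / 2:
        return 0
    return n_blue

assert deaths(0, 100) == 0    # unanimous red: optimal, but tolerates zero defection
assert deaths(50, 100) == 0   # majority blue: optimal at just 50% cooperation
assert deaths(49, 100) == 49  # blue narrowly failing is the worst case
```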

At the least it's "let's end the nation of Israel and physically remove the Jews to somewhere else", at the most it's ordinary universal anti-Semitism that someone is playing search-and-replace games with.

The mainstream western anti-zionist position is that jews would not be removed. The most popular anti-zionist position is a one-state solution where Palestinians get full citizenship in Israel, often alongside Palestinian right-of-return. Now, zionists would argue that such an outcome would cause problems such as a group like Hamas being elected as the government of Israel and ethnically cleansing jewish people, or at least committing terrorist attacks once they are all Israeli citizens with freedom of movement. But the standard anti-zionist position is that this wouldn't happen, that palestinians are resorting to violent resistance against oppression and would no longer need to do so once they are no longer oppressed. The standard comparison is to South Africa, where terrorist leaders such as Nelson Mandela became the new government but didn't outright ethnically cleanse white people. (The South African government discriminates against white people through heavy affirmative action, is now failing to keep reliable electricity and clean water going, has the 3rd highest murder rate in the world, and sometimes has the leaders of political parties talk about mass-murdering white people. But they haven't actually done it and many anti-zionists would be unaware of these things anyway.)

I think this is an important distinction because otherwise you don't appreciate the extent to which anti-zionism is an extension of standard anti-racist positions. They believe Israel would do fine even if it was majority palestinians, just like they believe majority-white countries would be fine if they opened the floodgates for arabic/african/etc. immigration. They believe ethnic conflicts generally have a good weak side (the oppressed) and a bad powerful side (the oppressor). They believe violence by an oppressed group is ultimately the result of their oppression, like how "riots are the language of the unheard" and thus the BLM riots indicated how badly african-americans are being mistreated by the police. Even if they got their one-state solution and there was continued conflict, they would advocate not for ethnically cleansing jews to make a more homogeneous state but for affirmative-action policies and reparations favoring non-jews until they are no longer oppressed (which would at minimum require that they have equal outcomes to jewish Israelis).

Instead, he started messing around with the service itself

No he didn't.

By then, Musk’s sympathies appeared to be manifesting on the battlefield. One day, Ukrainian forces advancing into contested areas in the south found themselves suddenly unable to communicate. “We were very close to the front line,” Mykola, the signal-corps soldier, told me. “We crossed this border and the Starlink stopped working.”

They are geofenced to not work in Russian-controlled areas so that Russia can't use them. Starlink continually updates this to match the situation on the ground, presumably with some allowance for contested areas. Occasionally Ukrainian advances have outpaced Starlink employees' knowledge of the situation and their updates to the geofence, particularly during the period being referred to, when Ukraine made rapid advances. "Appeared to be" is the giveaway to be maximally skeptical even if you don't already know about the incident in question. "The media very rarely lies", but "appeared to be" here functions as journalist-speak for reporting Twitter rumors without bothering to mention whether those rumors were true. The New Yorker doesn't feel the need to verify the factual accuracy of the claim because he's not saying that appearance was true, just referring to the fact that it seemed true to thousands of people on Twitter who already hated Musk for his politics and jumped to conclusions after hearing about some rapid Ukrainian advances having their Starlink service cut out. The only plausible story of political interference (aside from sending the Starlink terminals at all) has been the claim he refused to disable Starlink geofencing for proposed Starlink-piloted suicide drones striking Crimea, out of fears of escalation.

alleged to have engaged in a little amateur diplomacy that resulted in his publicly proposing a settlement to the war that he had to have known the people he was ostensibly helping would find unacceptable

The article doesn't mention it but of course he has said exactly why he wants a settlement: he is concerned about a proxy war between the U.S. and Russia escalating into nuclear war and posing a major risk to humanity. His way of thinking here should be more understandable to this forum than most, since he has taken considerable inspiration from the same intellectual environment as LessWrong/Effective Altruism/Scott Alexander. His underlying motive is the same as his motive for Tesla/SolarCity (global warming), SpaceX (mitigate existential risk by making humanity a two-planet species), OpenAI (mitigate AI risk by having the developers take the risk seriously), Neuralink (mitigate AI risk through interfaces between AI and the human brain), and Twitter (mitigate political censorship and the risks it enables). Not to mention sending the Starlink terminals to Ukraine in the first place, though that was more small-scale than his usual concerns.

He didn't try to personally negotiate a settlement because he sent the Starlink terminals and felt that gave him the right to; he would have done it anyway. He did it because, having made more money than he could ever personally use, he has been working to defeat what he perceives as threats to humanity. You might criticize his arrogance in believing he is capable of doing so, but Tesla and (especially) SpaceX have accomplished things that conventional wisdom considered impossible, so it is perhaps understandable that he thought it was worth trying. There is obviously nothing wrong with criticizing him, and I think he has made plenty of mistakes, but I wish people actually engaged with his reasoning rather than being like this article and rounding him off as a Putin sympathizer or whatever.

During the pandemic, Musk seemed to embrace covid denialism, and for a while he changed his Twitter profile picture to an image of the [Deus Ex protagonist], which turns on a manufactured plague designed to control the masses. But Deus Ex, like “The Hitchhiker’s Guide to the Galaxy,” is a fundamentally anti-capitalist text, in which the plague is the culmination of unrestrained corporate power, and the villain is the world’s richest man, a media-darling tech entrepreneur with global aspirations and political leaders under his control.

I just skimmed the latter part of the article but this bit stood out. We get a "seemed to" and it's implied he...believes in a specific conspiracy theory because he once changed his Twitter avatar to the protagonist of an iconic videogame in which a bunch of conspiracy theories are true? But at the same time trying to claim Deus Ex as an anti-capitalist game that he is implied to be missing the point of? If Deus Ex is so leftist why does using it as a Twitter avatar signal a specific conspiracy theory rather than signaling leftism, not to mention signaling neither?

Industrial farming of animals requires feeding them, and thanks to thermodynamics this is dramatically less efficient than growing food for humans directly. (Theoretically you can raise some grass-fed cattle on grassland that already exists without clearing new land but this does not scale and still kills the cattle themselves. Note that labeling beef as "grass-fed" does not mean they get their food exclusively from pasture, it includes feeding them hay which itself has to be harvested.) You don't need to throw up your hands and act like there's no way to know if there's more animal death/suffering required for beef or bread, various rough estimates like this are enough to show the intuitively obvious answer is correct.
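As a back-of-the-envelope illustration of the thermodynamic point: the conversion efficiencies below are rough literature figures supplied for illustration (on the order of 3% of feed calories ending up as edible beef calories, roughly 13% for poultry), not numbers from the linked estimates:

```python
# Crop calories consumed per food calorie delivered, using approximate
# published caloric conversion efficiencies (assumed figures, for illustration).
conversion_efficiency = {
    "crops eaten directly": 1.00,
    "poultry": 0.13,
    "beef": 0.03,
}
for food, eff in conversion_efficiency.items():
    print(f"{food}: ~{1 / eff:.0f} crop calories per calorie eaten")
# Whatever field deaths occur per crop calorie are multiplied by the same
# factor, before even counting the farmed animals themselves.
```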

This is just "The Pyramid and the Garden". People aren't good at properly adjusting for the level of cherrypicking and degrees of freedom possible when you have thousands of people scouring a large world for evidence matching their pet theory.

A photoshoot for a fashion company reuses a "legal documents" prop from the shooting of a television drama as "office documents", the same company sells fashion that is vaguely leather-daddy inspired and didn't segregate it from photoshoots with children, and you conclude that "we are ruled by satanic pedophiles". (And they are deliberately embedding evidence about this in fashion photoshoots for some reason.) If you lived in a tribe of a few dozen people and happened to personally notice two coincidences like that about a single person, maybe that would be reason to be suspicious. But you don't, you live in a society of hundreds of millions where thousands of people spend time hunting down and broadcasting stuff like this for your perusal. As a result this doesn't even really tell us about Balenciaga's marketing department, let alone "society". But people's brains don't adjust like that, so give them a few coincidences like this and they'll either come to believe false things or dismiss it out of hand as a conspiracy theory. And then the ones who do the latter are still vulnerable to the same mistakes in reasoning when packaged in ways that don't register as "conspiracy theory", especially ones spread by mainstream media sources.

Option c: blacks are 5x as much "overpoliced" or white criminals are 1/5 as likely to be caught and convicted.

Studies based on the National Crime Victimization Survey show a close match between the racial demographics of criminals as reported by those claiming to have been victimized and the racial demographics of those arrested for those crimes. The 13/53 figure is specifically based on murder and is thus technically not covered, since murder victims cannot be surveyed, but violent crimes in general are included and show a similar but somewhat lesser disparity. (Generally the racial disparity is larger the more violent and severe the crime is, so murder has a larger disparity than violent crime in general, which has a larger disparity than crime in general. So while the crime victimization survey also doesn't cover crimes without victims, those have a smaller disparity to begin with, and white criminals whose victims answer the National Crime Victimization Survey don't seem to be getting away with it more.)

He is likely referring to this from pages 11-12 of the GPT-4 whitepaper:

GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, after the post-training process, the calibration is reduced (Figure 8).
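To unpack what "calibrated" means there: among answers the model gives with roughly 70% confidence, roughly 70% should be correct. A minimal sketch of one standard way to measure this (expected calibration error; my own illustration, not OpenAI's evaluation code):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin answers by stated confidence; within each bin, compare mean
    confidence to the fraction actually correct; average the gaps,
    weighted by how many answers fall in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece  # 0.0 means perfectly calibrated
```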

In any case, the articles you quote are oversimplified and inaccurate. Predicting text (and then satisfying RLHF) is how it was trained, but the way it evolved to best satisfy that training regime is a bunch of incomprehensible weights that clearly have some sort of general reasoning capability buried in there. You don't need to do statistical tests of its calibration to see that, because something that was truly just doing statistical prediction of text, without having developed reasoning or a world-model to help with that task, wouldn't be able to do even the most basic reasoning like this unless it already appeared in the text it was trained on.

It's like saying "humans can't reason, they're only maximizing the spread of their genes". Yes, if you aren't familiar with the behavior of LLMs/humans, understanding what they evolved to do is important to understanding that behavior. It's better than naively assuming that they're just truth-generators. If you wanted to prove that humans don't reason you could point out all sorts of cognitive flaws and shortcuts with obvious evolutionary origins and say "look, it's just statistically approximating what causes more gene propagation". Humans will be scared of things like spiders even if they know they're harmless because they evolved to reproduce, not to reason perfectly, like an LLM failing at Idiot's Monty Hall because it evolved to predict text and similar text showed up a lot. (For that matter humans make errors based on pattern-matching ideas to something they're familiar with all the time, even without it being a deeply-buried instinct.) But the capability to reason is much more efficient than trying to memorize every situation that might come up, for both the tasks "predict text and satisfy RLHF" and "reproduce in the ancestral environment", and so they can do that too. They obviously can't reason at the level of a human, and I'd guess that getting there will involve designing something more complicated than just scaling up GPT-4, but they can reason.

White nationalism doesn't just mean "pro-white", it is generally defined by its advocates as including a desire for the existence of white ethnostates. It's like conflating "cares about jewish people" and "zionist": many jews believe zionism harms jewish people instead of helping them (and doing it with white nationalism is even less accurate because zionism is currently more mainstream).

It's not just a matter of prioritization but of beliefs about the world. There are plenty of normal people who genuinely think that racial diversity benefits everyone, including white people. Furthermore, even within the realm of people who both know about HBD and think it potentially justifies government discrimination on the basis of race, most are not white nationalists. For instance white nationalists have termed Emil Kirkegaard an "IQ nationalist", though in the linked post he ends up concluding that explicit IQ nationalism would just amount to much the same thing as skilled worker laws, and that the important thing is keeping out the far-below-average immigrants, without IQ tests or racial discrimination being necessary. Even if you go to a more populist community like /pol/, there are both white nationalists who think each race should get its own ethnostate and plenty of people who only have an issue with specific races like black people and don't care about racial separation otherwise. If your definition of "white nationalist" includes people who want to ban black immigration but allow mass-migration from Hong Kong, on the basis that they believe that such immigration would benefit everyone in the destination country including white people, it's not going to be very recognizable to conventional white nationalists.

Right, but the probability of success seems more than high enough to compensate. Not only is 50% blue better than 95% red, it's also easier because you only need 50% instead of 95%. It's especially high if communication is allowed, but even without communication "the most obviously pro-social option" is a natural Schelling point.

Now, this is fairly fragile: it's plausible that with different question wording, or a society with a more cynical default conception of other people (Russia?), or the wrong set of memes regarding game theory, red would seem enough of a natural Schelling point to make aiming for blue not worth it. This would of course be a worse outcome, so if you did have access to communication it would make sense to rally people around blue rather than red if doing so seems feasible.

You're comparing diagnoses per year for those 6-17 to number of children. You have to multiply the yearly figure by 12 for the whole time period. The U.S. population 6-17 is apparently 49,466,485, which would put the percentage who end up with gender-dysphoria diagnoses before the age of 18 at 1.02%.
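Spelled out as arithmetic (the per-year figure here is backed out from the percentage above, so treat ~42,000 as approximate):

```python
population_6_to_17 = 49_466_485
diagnoses_per_year = 42_000   # approximate yearly figure implied above
cohort_years = 12             # ages 6 through 17 inclusive

share = diagnoses_per_year * cohort_years / population_6_to_17
print(f"{share:.2%}")         # -> 1.02%
```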

There's a timeline (from the anti-Atheism+ perspective) here. The two things that made it blow up were when Watson "called out" Stef McGraw and then when Dawkins responded to a blog post defending that calling out. The original negative responses to Watson's video were just some Youtube comments, Stef McGraw's blog post, and Rose St. Clair's video response. Stef was a student who posted a blog post disagreeing with the idea that the encounter was an example of sexism. Watson, giving a talk at the CFI Student Leadership Conference, mentioned Stef was in the audience, called out her "parroting of misogynistic thought", conflated fear of "sexual objectification and assault", and claimed people like her were scaring women away from atheist conferences:

Because there are people in this audience right now who believe this: that a woman's reasonable expectation to feel safe from sexual objectification and assault at skeptic and atheist events is outweighed by a man's right to sexually objectify her. That's basically what these people have been telling me, and it's not true.

Since starting Skepchick I've heard from a lot of women who don't attend events like this because of those who have this attitude. They're tired of being objectified, and some of them have actually been raped; quite a number of them have been raped, or otherwise sexually assaulted. And situations like the one I was in, in an elevator, would have triggered a panic attack. They're scared, because they know that you won't stand up for them. And if they stand up for themselves, you are going to laugh them back down. And that's why they're not coming out to these events.

The call-out provoked some criticism on Twitter, and Watson responded with a blog post defending her actions and calling out some other people like Rose St. Clair and CFI intern Trevor Boeckmann. More criticism followed, such as Abbie Smith's "Bad Form, Rebecca Watson" blog post and McGraw's own response. This in turn provoked a bunch of blog posts supporting Watson's actions, such as PZ Myers's "Always Name Names!". In the comments for "Always Name Names", Richard Dawkins made his famous "Dear Muslima" comment mocking the idea that being asked to have coffee together at a conference was an example of sexism. (It is sometimes characterized as being a "don't complain because things are worse elsewhere" argument, but his other comment specifically said that wasn't his point and explained his reasoning.) This drew more blog posts than I can count calling him a misogynist and so on, and got Watson to say she would boycott his work.

Often when Elevatorgate is summarized from the pro-social-justice side it's described as if Watson just made the comparatively mild original video and the atheism/skepticism community blew up at her, but what really got it going was how she responded to those like McGraw who disagreed, and how she ramped up her condemnation of the original interaction. (Something many of her supporters took even further, such as Amanda Marcotte arguing that Elevator Guy's invitation amounted to a rape threat.)

If Russia invaded Alaska and said "if you shoot back at our soldiers we will launch nuclear weapons", letting them conquer Alaska would be better than a nuclear exchange. Nonetheless the U.S. considers "don't invade U.S. territory" a red line that they are willing to go to war with a nuclear power to protect. The proposal would be to establish the hypothetical anti-AI treaty as another important red line, hoping that the possibility of nuclear escalation remains in the background as a deterrent without ever manifesting. The risk from AI development doesn't have to be worse than nuclear war, it just has to be worse than the risk of setting an additional red line that might escalate to nuclear war. The real case against it is that superhuman AI is also a potentially beneficial technology (everyone on Earth is already facing death from old age, after all, not to mention non-AI existential risks); if it were purely destructive, then aggressively pursuing an international agreement against developing it would make sense for even relatively low percentage risks.

No. To quote a post I made in response to someone expressing the same concern:

Is the thing you're afraid of the idea that quantum immortality would involve something like a near-eternity of horrible lives where you're almost but not quite dead? Because if so, I think you're badly misjudging the probability distribution. Those situations are associated with quantum immortality only because they're so incredibly unlikely that if they happen it'll be obvious that quantum immortality is true - but by definition that means they are absurdly unlikely to happen! Something like "you get shot and almost die, but random quantum fluctuations cause a lump of graphite to spontaneously appear inside your chest and barely stop the bleeding" is unlikely on a truly cosmic scale; even under the logic of quantum immortality it only matters if it's the only future where you don't die. And that sort of quantum immortality would require it happen again and again, multiplying the improbability each time.

Even if quantum immortality is true, anything the slightest bit plausible will completely dominate the probability distribution. There is no reason that technology granting near-immortality is impossible, so in virtually every Everett branch where you survive the reason is just that the technology is invented and you use it. Which is generally going to correspond to a technologically advanced and prosperous society. Quantum immortality wouldn't feel like a series of staggering coincidences barely preserving your life, it would feel like living in a universe where everything went surprisingly well. Billions of years from now your society is harvesting energy from black holes and maybe occasionally during get-togethers with your friends you debate whether this outcome was unlikely enough that quantum immortality is probably true.

Is this just confusion about the Twitter staffer's unclear grammar? The "not" in that sentence refers to the "he's saying" part, not the "voted for him" part. Another way to say it would be "It's pretty clear he's referring to people who voted for him, not the rioters". The Twitter staffer was not denying that the rioters were a subset of the voters, he was claiming they were not the group Trump was referring to, because Trump was referring to the set of all Trump voters.

I think the unnecessary "and" might be adding more ambiguity to an already ambiguous sentence; would it have been clearer if he said "He's saying the "American Patriots" are the ones who voted for him, not the terrorists"? Of course it also comes from whatever the grammatical term is for omitting the verb phrase in the second half rather than repeating it from the first half; it would have been clear if he said "It's pretty clear he's saying the "American Patriots" are the ones who voted for him, not saying the "American Patriots" are the terrorists". For instance:

https://www.usingenglish.com/forum/threads/omitting-a-verb-when-it-appears-the-second-time.170698/

Sheet 1 of the attached file shows the data on the male students and Sheet 2 the female students.

And then all the people replying to you are confused because they don't understand that you're interpreting the "not" as meaning "the rioters are not Trump voters", and think you mean that referring to a superset must necessarily be referring to each individual subset.

Despite being based on GPT-4, Bing is apparently well-known for performing dramatically worse. There have been some complaints of GPT-4's performance degrading too, presumably due to some combination of OpenAI trying to make it cheaper to run (with model quantization?) and adding more fine-tuning trying to stop people from getting it to say offensive things, but hopefully not to the extent that it would consistently fail that sort of world-modeling. (If anyone with a subscription wants to also test older versions of GPT-4, it sounds like they're still accessible in Playground?)

I don't think it's plausible that all the examples of GPT-4 doing that sort of thing are faked, not when anyone shelling out the $20 can try it themselves. And people use it for things like programming, you can't do that without reasoning, just a less familiar form of reasoning than the example I gave.

Gender = a set of roles, behaviors, etc, generally expected by society of the members of each sex.

Using "gender" to refer to this instead of "gender roles" or "gender stereotypes" conflates it with the idea of "gender identity". The idea of "gender identity" claims that people have some inherent deeply-rooted "true" gender separate from both their bodies and what societal roles they fulfill or stereotypes they match. This then comes with a whole set of ideas about "misgendering", about "deadnaming" if the non-binary identification accompanied a request for a name change, etc.

Note that actually basing it off gender roles would be completely different, for one because it would be based on society rather than the individual. Nobody advocates calling every woman in the military "he" even though the job is a male gender role. Similarly someone might believe in stereotypical correlations such that he's surprised to see a female programmer or a white NBA player, but that doesn't mean he thinks those people are actually becoming male or black, not even partially.

Given that this transition would be pretty expensive and the main benefit is getting to invest in the private market, the counter is: why not just let the government invest in the private market?

I'm not an economist, but doesn't this just amount to increasing the money supply in a way that makes the government responsible for more direct investment decisions? The government (and Federal Reserve) already control the size of the money supply, so what makes it better than increasing investment some other way like lowering interest rates or quantitative easing? The linked article talks about higher returns, but money doesn't create wealth; investment of actual resources creates wealth, and money decides where those resources go. Right now the money is forcibly invested in government treasuries, which seems identical to the money ceasing to exist for a period of some decades. Since the money is simultaneously collected and paid out, and the amount paid is currently larger, this represents money creation, as well as an obvious transfer to the elderly. If in between it was also invested, this would constitute a lot more money creation, which in general can be done in other ways and right now does not seem like what the economy needs. I guess the main other thing it would do is change the ratio of investment and consumer spending; is that currently desirable in the U.S.? The linked article doesn't say, instead it talks about monetary "returns" to the entity that already prints the money.

Scientific American: Coal Ash Is More Radioactive Than Nuclear Waste

the study i linked found that for the energy generated, more radiation is given out by fly ash, which contains trace amounts of uranium and thorium. while the amount of radiation that makes it into people from both of these sources isn't dangerous, it's worth pointing out when given the concerns of "gonna be irradiated."

The title of that article is laughably false. The underlying point it is based on, that under normal operation a nuclear plant releases less radioactive material into the environment than a coal plant, is technically true but grossly misleading. Under normal operation nuclear plants release essentially no radioactive material; the radioactivity concern is concentrated purely in the possibility of something going wrong. A sanity-check I did after encountering this argument a decade ago:

The EPA gives the radioactivity of average fly ash as 5.8 picocuries per gram, and the U.S. produces around 125 million tons of coal-combustion byproducts per year as of 2006. If we overestimate and assume all coal-combustion byproducts are the more-radioactive fly ash, that comes to around 658 curies worth of material per year. By comparison, a year after the Fukushima disaster TEPCO estimated total radiation releases as 538,100 terabecquerels - equivalent to 14,543,243 curies. Note that this assumes all fly ash is being released into the environment when modern first-world plants safely capture most of it. So one year after the Fukushima disaster it had already released more radiation than 22,000 years of 2006-era U.S. coal radiation emissions, under very pessimistic assumptions. Which means we can confidently estimate Fukushima has released far more radiation than all the coal burned in human history and all the coal remaining in the ground that could be burned combined.
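For anyone who wants to check the unit conversions, here they are as a short script (the inputs are the EPA and TEPCO figures cited above; tons treated as short tons):

```python
PCI_PER_CI = 1e12            # picocuries per curie
BQ_PER_CI = 3.7e10           # becquerels per curie
G_PER_TON = 907_185          # grams per short ton

# Coal: pessimistically treat all 125M tons/year of combustion
# byproducts as fly ash at 5.8 pCi/g.
coal_ci_per_year = 125e6 * G_PER_TON * 5.8 / PCI_PER_CI
print(round(coal_ci_per_year))             # ~658 curies/year

# Fukushima: TEPCO's 538,100 terabecquerel estimate, one year after the disaster.
fukushima_ci = 538_100 * 1e12 / BQ_PER_CI
print(round(fukushima_ci))                 # ~14,543,243 curies

print(round(fukushima_ci / coal_ci_per_year))  # ~22,000 years' worth of coal
```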

This doesn't mean that nuclear power is overall a bad idea, but it's definitely not because coal is a worse radioactivity concern. From what I've heard this particular misleading talking point has been going around since even before it started circulating on the internet; I remember someone telling me that it was going around Stanford decades ago. People should be cautious with counterintuitive factoids like this, because often they spread because they are too good to check.

Speaking personally, I have no redneg identity whatsoever. Although born as male, I feel neither as a nam nor as a namow, much like atheists who no longer feel as a believer.

That's just normal, it doesn't set you apart from the general public. It's only unusual in that most people who encounter the concept of gender identity aren't introspective enough to think about whether they actually have an internal sense of such a thing and don't have enough contrarian tendencies to call bullshit. To quote a comment I made a year ago:

The concept of "cis gender identity" wasn't created because anyone investigated whether such a thing actually existed and found that it did, it was created by trans activists as a deliberate mirror-image of their model of transgenderism. Someone who believed in the concept hard enough and was introspective about it would probably notice that he doesn't actually have any internal sense of what his "gender identity" is and decide that makes him "non-binary" or something, when really it just makes him normal. Or look until he interpreted random noise as a sign of some gender identity or other. After all, as the British charity Mermaids has taught us (and taught the organizations like schools and police that have taken their training), not being Barbie or GI Joe means you're towards the middle of the "gender spectrum", and by the way gender dysphoria can present as any and all problems you might have.

Some people are asking whether people who accepted FTX money should have “seen the red flags” or “done more due diligence”.

I find this stuff really obnoxious. Since when has it ever been the job of charities to investigate the businesses of the people donating them money? EA or not, what charity does this? It would be a ridiculous waste of time and money; it's not their job, and specialization exists for a reason. People are talking like it's some deep failing that they didn't find him suspicious and refuse his money, but just how many legitimate donors should they be willing to refuse as "suspicious" for the sake of avoiding a criminal? Not that it would have been practical anyway: EA-endorsed charities are not some unified group, and a lot of his "EA" donations were stuff like directly supporting political candidates who promised to do something about pandemic preparedness.

We're not talking about Sequoia Capital, the venture-capital firm that has now written down $214 million in FTX equity, had access to internal information, and actually had a duty to their investors to try to avoid this sort of thing. Similarly we're not talking about their other institutional investors like BlackRock, the Ontario Teachers' Pension Plan, Tiger Global Management, SoftBank Group, Lightspeed Venture Partners, and Temasek. We're not talking about Miami-Dade County selling them the naming rights to a stadium for $125 million, giving them a lot more advertising than some blog posts saying "this billionaire supports EA, great!". Somehow EA is held to a much higher standard than any of these, even though it seems obvious to me that accepting donations should be held to dramatically lower standards than investing teachers' retirement money. EA should focus on effective charity, which is already a sufficiently-ambitious specialty; it shouldn't focus on doing unpaid amateur investment analysis trying to beat institutional investors at their own jobs for the sake of refusing donations that might turn out to be from a criminal.

Because democracy isn't just an arbitrary principle, it's a political technology for nonviolent resolution of unrest. People who live in your country but don't vote can still riot, can still strike, and can still join insurgent groups. There are ways to suppress the majority, but they are much more difficult and costly to the country than simply having them continue to live far away outside your borders where they can't readily do those things.

In democracies those tactics are mainly relegated to groups with minority political views that can't win at the ballot box, and sometimes they get their way by caring more than the majority or having elite sympathizers, but most of the time it is advantageous to just participate in the democratic system instead. This has made democracies remarkably stable compared to other political systems. Your proposal, on the other hand, seems like it would fall to a Gandhi-style resistance campaign or violent revolution the first time there was a serious dispute between the natives and the disenfranchised descendants of immigrants.

Both. Mostly I was contrasting it with the opposite case against it: that risking nuclear escalation would be unthinkable even if it were a purely harmful doomsday device. If it was an atmosphere-ignition bomb being developed for deterrence purposes that people thought had a relevant chance of going off by accident during development (even if it was only a 1% risk), then aggressively demanding an international ban would be the obvious move even though it would carry some small risk of escalating to nuclear war. The common knowledge about the straightforward upside of such a ban would also make it much more politically viable, making it more worthwhile to pursue a ban rather than focusing on trying to prevent accidental ignition during development. Also, unlike ASI, developing the bomb would not help you prevent others from causing accidental or intentional atmospheric ignition.

That said, I do think that is the main reason that pursuing an AI ban would be bad even if it was politically possible. In terms of existential risk I have not read The Precipice and am certainly not any kind of expert, but I am dubious about the idea that delaying for decades or centuries attempting to preserve the unstable status-quo would decrease rather than increase long-term existential risk. The main risk I was thinking about (besides "someone more reckless develops ASI first") was the collapse of current civilization reducing humanity's population and industrial/technological capabilities until it is more vulnerable to additional shocks. Those additional shocks, whether over a short period of time from the original disaster or over a long period against a population that has failed to regain current capabilities (perhaps because we have already used up the low-hanging fruit of resources like fossil fuels), could then reduce it to the point that it is vulnerable to extinction. An obvious risk for the initial collapse would be nuclear war, but it could also be something more complicated, like dysfunctional institutions failing to find alternatives to depleted phosphorus reserves before massive fertilizer shortages. Humanity itself isn't stable either: it is currently slowly losing intelligence and health both to outright dysgenic selection from our current society and to lower infant mortality reducing purifying selection, so the humans confronting future threats may well be less capable than we are. Once humans are reduced to subsistence agriculture again, the obvious candidate to take them the rest of the way would be climate shocks, as have greatly reduced the human population in the past.

Furthermore, I'm not that sympathetic to Total Utilitarianism as opposed to something like Average Preference Utilitarianism, I value the preferences of those who do or will exist but not purely hypothetical people who will never exist. If given a choice between saving someone's life and increasing the number of people who will be born by 2, I strongly favor the former because his desire to remain alive is real and their desire to be born is an imaginary feature of hypothetical people. But without sufficient medical development every one of those real people will soon die. Now, wiping out humanity is still worse than letting everyone die of old age, both because it means they die sooner and because most of those people have a preference that humanity continue existing. But I weigh that as the preferences of 8 billion people that humanity should continue, 8 billion people who also don't want to die themselves, not the preferences of 10^46 hypothetical people per century after galactic colonization (per Bostrom's Astronomical Waste) who want to be born.

Back on the object level of the issue at hand, for all it's worth, reports of Russians abusing or executing PoWs so far - especially after the chaos of the first few days - are very thin on the ground, despite what I assume must have been a very large number of people looking very hard for evidence.

I think the main reason you haven't heard about it is that Russia's torture of both POWs and civilians is so routine and well-known that it isn't considered very newsworthy.

https://en.wikipedia.org/wiki/Russian_torture_chambers_in_Ukraine

https://thehill.com/policy/international/3543197-inside-russias-war-camps-ukrainian-pows-detail-torture-abuse/

https://www.cnbc.com/2022/10/28/russia-ukraine-war-un-report-details-accounts-of-rape-torture-and-executions.html

You're claiming rationalism thinks that philosophical debates about "free will" are meaningful and have relevance to assigning personal responsibility? By "rationalism" meaning LessWrong people? I don't believe you. And sure enough, I searched on LessWrong and found that not only does Eliezer Yudkowsky not agree with those like Barbara Fried, in 2008 he called it so trivial that it served as a "practice question" for aspiring rationalists, then later elaborated on his solution at further length.

https://www.lesswrong.com/tag/free-will

It's not like this is even restricted to one of the more obscure "sequences"; I've seen "Dissolving the Question" cited elsewhere. Look, I'm sure there are people on /r/badphilosophy who would mock the arrogance of thinking you have easily dissolved a famous philosophical dilemma, and that would be annoying in its own way. But it would at least display some surface-level engagement with what was said. Grouping LessWrong in with a philosopher arguing that some inane sophistry about "free will" means we should adopt her preferred criminal-justice policy positions reflects an incredible lack of understanding of what they believe and how they think. Barbara Fried is engaging in the exact sort of thing that makes LessWrong people have such a poor opinion of the health of philosophy as a field.

I am discussing only #1, which is true by the definition of "gender"

Well, not really. In standard usage it is a synonym for "sex".

https://www.etymonline.com/search?q=gender

The "male-or-female sex" sense of the word is attested in English from early 15c. As sex (n.) took on erotic qualities in 20c., gender came to be the usual English word for "sex of a human being," in which use it was at first regarded as colloquial or humorous. Later often in feminist writing with reference to social attributes as much as biological qualities; this sense first attested 1963. Gender-bender is from 1977, popularized from 1980, with reference to pop star David Bowie.

Most people are not engaging in feminist academic writing, and so they usually do not imitate its confusing use of "gender". Everything from conversations with normal people to news articles to government/corporate forms will use "gender" and "sex" interchangeably. Meanwhile the "identify as" definition is sufficiently new that it isn't even mentioned. Needless to say, people are not obliged to use that one either, especially since it is typically used to smuggle in the contested assumption that people have an internal feeling of "gender identity".

It's that they identify as male.

Then how does that follow from defining gender as "a set of roles, behaviors, etc, generally expected by society of the members of each sex"? Under this definition, is it meaningful to say things like "Andriy falsely identifies as a man, since by fleeing Ukraine as a woman rather than staying she is refusing her society's able-bodied male gender role of staying in case she is needed to fight and die against Russia"? Or does it mean that gender self-identification is true by definition, in which case it is not a reflection of society's gender roles but of the "gender identity" definition?