sodiummuffin

2 followers   follows 0 users   joined 2022 September 05 03:26:09 UTC

User ID: 420


Mostly the posts are about the censorship aspect of the story, which is naturally going to attract comments by people who oppose censorship on principle and/or have specific examples they object to. Even people who support censorship tend not to be as passionate about censoring COVID-19 stuff as they are about something like "hate speech", so it's not going to get a bunch of comments about not "tolerating the intolerant" or whatever.

In any case, your post comes across as obnoxious bulverism, seemingly taking for granted that disagreement with your position is driven by irrational fear or "Mass Formation Psychosis". You don't even really explain what your position is (No lockdowns whatsoever? Lesser lockdowns? Lockdowns until 3 weeks after vaccines were available to all and not a day longer? Lockdowns implemented voluntarily by non-governmental organizations but not any by the government? Government campaigns against social distancing so it doesn't drag on due to voluntary behavior? Better-targeted lockdowns that don't do useless things like restrict borders after the virus is already spreading domestically?), let alone why you hold that position. Are there views you don't agree with but also think are a normal mistake rather than psychosis? Is this about specific views, or are you postulating a bias towards pro-lockdown views without necessarily asserting they are actually incorrect? Your post doesn't make any of this clear; it doesn't have much content at all. It just points to a thread with people expressing views closer to your own and postulates that this shows people are getting over the madness that made them disagree with you.

perhaps the most prominent advocate of such giving in the minds of the general public. SBF, in a sense, was EA’s face to the normies.

As far as I can tell this isn't true. I was somewhat familiar with EA and I'd still never heard of him before, though I think I saw a mention of the FTX Future Fund in a Scott post or something. Not that long ago I remember reading people worrying that perception of EA was being driven too much by longtermism due to the publication of What We Owe the Future; if that was true, maybe William MacAskill qualifies as the public face. But more likely it's still just Peter Singer. SBF donated a pile of money and got mentioned in some news articles because of it, but he wasn't writing books or articles about EA, he wasn't giving talks about it, and he wasn't representing the EA position on panel shows. Searching old news articles, the main place his support for EA got mentioned was in puff pieces about him personally. Now, they might get associated in the public imagination in retrospect, since what happened with FTX is big news and there are some articles about him and EA, but back when he was just some philanthropic billionaire I don't think he was a big part of the public perception.

He was referring to what the protesters did, not what they were specifically charged with. Even if protesters who did no more than trespass were charged with "Treason", that would not undermine his point about them being treated differently from how trespassing protesters are usually treated. As it happens, what a lot of them were charged with was "Obstruction of an official proceeding", an incredibly broad law created for people trying to interfere with criminal investigations/trials that had never been used in this way before. The Wikipedia article is divided into "Use prior to 2021" and "2021 U.S. Capitol attack" for a reason. And then, even if you think that undermines the point somehow, the specific guy he linked was just charged with "entering a restricted area", which is even more overtly about him trespassing.

My hypothesis would be that anti-White statements of this magnitude and timing aren't nearly so common (or perhaps even existent) among people in the "head of a broad public first-world organization" category.

Off the top of my head some of the public statements about the race-motivated prioritization of the COVID-19 vaccine would seem to contradict this. Not to mention it actually becoming U.S. government policy and killing many thousands of people. There are probably closer analogues, but I remember that particular one well and wrote this post about it at the time:

The CDC has officially recommended ACIP's vaccine distribution plan that deprioritizes the elderly, even though they estimate this will save less lives, in part because more elderly people are white

The most overt quote mentioned in that post would be this one:

The New York Times: The Elderly vs. Essential Workers: Who Should Get the Coronavirus Vaccine First?

Harald Schmidt, an expert in ethics and health policy at the University of Pennsylvania, said that it is reasonable to put essential workers ahead of older adults, given their risks, and that they are disproportionately minorities. “Older populations are whiter,” Dr. Schmidt said. “Society is structured in a way that enables them to live longer. Instead of giving additional health benefits to those who already had more of them, we can start to level the playing field a bit.”

Or from the same article a quote from a member of the ACIP committee (the people responsible for writing the CDC's recommended prioritization):

Historically, the committee relied on scientific evidence to inform its decisions. But now the members are weighing social justice concerns as well, noted Lisa A. Prosser, a professor of health policy and decision sciences at the University of Michigan. “To me the issue of ethics is very significant, very important for this country,” Dr. Peter Szilagyi, a committee member and a pediatrics professor at the University of California, Los Angeles, said at the time, “and clearly favors the essential worker group because of the high proportion of minority, low-income and low-education workers among essential workers.”

I think even the dry language of ACIP itself would be beyond the pale, like when they list "Racial and ethnic minority groups under-represented among adults >65" in red as a reason to not prioritize them. If it was instead "Whites under-represented" or "Jews over-represented" I do not think they would have remained in charge of writing the CDC's recommendations, nor do I think states would have adopted those recommendations.

You could argue that the issue is just that killing tens or hundreds of thousands through healthcare policy is much less dramatic than killing thousands through direct violence, even when the healthcare policy is explicitly racially motivated. That is the main reason I said the analogy is not particularly close. But at the same time, saying "Israel bears full responsibility for this tremendous loss of life." is less extreme than actually saying that loss of life is a good thing, let alone using your position in the U.S. government bureaucracy to deliberately cause that loss of life and being permitted to do so.

Regardless of exactly where the line is for anti-white statements and (more importantly) anti-white policies, it is obvious that they would not and could not have done something like this in the name of increasing black or jewish deaths instead. It is the product of explicit institutional racial bias. (Note that their policy actually did kill more black people because of how much more vulnerable the elderly are; it just killed even more white people, so white people made up a larger share of the deaths. And naturally it killed more jewish people as well.) Of course, that doesn't prove anything about the ordering of favored groups against each other like the OP was arguing. It just shows that social justice disfavors white people and is influential enough to shape the decisions of institutions like the CDC/ACIP and the states that followed their recommendations or prioritized by race outright.

The highest position on the progressive totem pole is being Jewish, not black or trans.

This would only follow if, for instance, there was a massacre of black people and jews could make similar remarks about the massacre without being fired. Obviously white people are lower, but that doesn't tell us anything about the ordering of the favored groups, or whether they are ordered in any sort of consistent way to begin with.

I think this is the intended line of thinking, but red doesn't require any cooperation: pure self-interest can grant it too.

The issue is the extreme difficulty of that level of coordination, not their specific motives. Imagine I said "coordination" instead of "cooperation" if you prefer. If you place an above-zero value on the lives of people who might press blue, then the optimal outcome is either >50% blue or exactly 100% red, with every other possibility being worse.

You can't rely on 100% to do pretty much anything, including act on self-interest. People in real life do things like commit suicidal school shootings, and you have to make decisions taking that into account. As I pointed out, even most mundane crime is self-destructive and yet people do it anyways. In this case, as people have pointed out, some people will pick blue by mistake, because they are momentarily suicidal enough to take a risk even though they wouldn't carry out a normal suicide, or (most of all) because they realize the above and want to save everyone.
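To make the payoff structure concrete, here is a toy calculation (my own sketch; it assumes the standard formulation of the poll, where blue-choosers die unless blue votes exceed 50%):

```python
# Toy model of the pill poll: blue-choosers die unless >50% choose blue.
# (Assumed standard formulation, for illustration only.)

def deaths(population: int, blue: int) -> int:
    """Number of deaths given how many people chose blue."""
    if blue * 2 > population:   # strictly more than half chose blue
        return 0                # everyone is saved
    return blue                 # every blue-chooser dies

population = 1_000
for blue in (0, 1, 400, 500, 501, 1_000):
    print(f"{blue:>5} blue -> {deaths(population, blue):>4} deaths")
# Only 0 blue (100% red) or >50% blue produces zero deaths; every
# intermediate outcome kills exactly the blue-choosers.
```

Since you can't count on a real population to hit exactly 0% of anything, the >50% blue outcome is the only robust zero-death target.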

My problem is, while I'm sure that not all the examples of GPT-4 seeming to get complex reasoning tasks are fake, if they cannot be replicated, what good are they?

I am saying they can be replicated, just by someone who, unlike you or me, has paid the $20. I suppose it is possible that the supposed degradation in its capabilities has messed up these sorts of questions as well, but probably not.

If GPT-4's ability to "reason" is ephemeral and seemingly random, is it really reasoning, or is it just occasionally getting lucky at ordering abstract tokens for it's monkey overlords?

There is a big difference between random guessing and having a capability that sometimes doesn't work, particularly if the chance of randomly getting the right result without understanding is low enough. Text generators based on Markov chains could output something that looked like programming, but they did not output working programs, because such an outcome is unlikely enough that creating a novel program is not something you can just randomly stumble upon without some idea of what you're doing. In any case, as far as I know GPT-4 is not that unreliable, especially once you find the prompts that work for the task you want.
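To illustrate the difference, here is a minimal bigram Markov chain text generator (my own toy sketch, not anything from the thread): each token is chosen based only on the previous token, so the output can be locally code-shaped while having none of the global structure a working program needs:

```python
import random
from collections import defaultdict

def train(tokens):
    """Record which token follows which in the training text."""
    model = defaultdict(list)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=15):
    """Walk the chain; each step depends only on the last token emitted."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "def add ( a , b ) : return a + b def mul ( a , b ) : return a * b".split()
print(generate(train(corpus), "def"))
# Output looks vaguely like code, e.g. "def mul ( a , b ) : return a + b def ..."
# but nothing steers it toward a program that actually runs and does something.
```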

Which makes sense to me, because a lot of those tests involve more generalized and flexible reasoning than the sorts of formalized mathematical logic examples it might plausibly be trained to imitate.

How well it reasons is a different question from whether it reasons at all. It is by human standards very imbalanced in how much it knows vs. how well it reasons, so yes people who think it is human-level are generally being fooled by its greater knowledge. But the reasoning is there and it's what makes a lot of the rest possible. Give it a programming task and most of what it does might be copying common methods of doing things that it came across in training, but without the capability to reason it would have no idea of how to determine what methods to use and fit them together without memorizing the exact same task from elsewhere. So practical use is generally going to involve a lot of memorized material, but anyone with a subscription can come up with novel questions to test its reasoning capabilities alone.

brought them to vast new audiences

Did he? How many people watch Alex Jones but aren't familiar enough with the conspiracy-theory community to have encountered an extremely popular conspiracy theory? And of course the grassroots conspiracy theorists had a lot more detail and arguments too, unlike Alex Jones vaguely referencing the claims that were already widespread. For that matter the first page of that 4plebs search in 2013 has a screenshot of Robbie Parker's former phone number, certainly not something Alex Jones shared and from before he even started referencing "actors". We hardly need Alex Jones to explain why he got harassing phone calls. Even if there's a significant audience of casual conspiracy-theorists who watch Alex Jones but aren't in contact with the rest of the conspiracy theory community, it seems like those would be the least likely to act on that information.

From what I can tell the modern conspiracy-theorist community is fundamentally grassroots: a distributed effort to accumulate the sort of seemingly-convincing evidence and arguments described in The Pyramid and the Garden. Not that non-conspiracy-theorists are immune to this; most of them will accept similarly bad evidence under other circumstances, they're usually just using heuristics like "reject things called conspiracy theories by mainstream sources", which fail as soon as something true is called a conspiracy theory or a false conspiracy theory is treated seriously by the mainstream. E.g. I remember people on 4chan sometimes thinking posts they didn't like were "bots" even when this was technologically very implausible, and then years later I saw the habit of accusing opposing posters of being "Russian bots" on sites like Twitter and Reddit go mainstream. (Complete with popular ideas like "You can tell they're Russian bots because their usernames end with 8 numbers on Twitter" - of course the actual reason is that that's Twitter's format for suggested usernames.) Anyway, maybe the conspiracy-theorist community used to be more centralized, but nowadays very few conspiracy theories originate with, or are even popularized by, some identifiable leader; the community is just networks of people who combine the same mistakes in reasoning most people make with a distrust of official sources.

a group that is harder to bring to court for defamation for reasons that should be obvious

Right. But it doesn't seem like you should get to legally treat the guy who happens to be the most prominent conspiracy-theorist as a scapegoat just because there's nobody else to sue. Defamation law doesn't have a mechanism to crack down on communities of people with mistaken ideas, and rightly so.

The starting point was you saying that people who aren't white nationalists don't "care about white people", and that the reasons for this are sufficiently obvious that even people with drastically different beliefs about the world wouldn't disagree with white nationalism otherwise. You're now talking about how allowing even highly selective non-white immigration could result in intermarriage that results in...the white population ending up with some fraction of a percent of east-asian ancestry? I'm not seeing how this is harmful, and I certainly don't think it is so self-evidently harmful that even people who disagree with you realize it is harmful.

Meanwhile, on a timeframe like that there are far more important factors to focus on. Obviously there are the non-selective forms of immigration, and the large racial minorities that already live in most majority-white countries. There is dysgenic evolutionary pressure costing around 1 IQ point per generation (along with lower conscientiousness, more ADHD, etc.), because modern society is currently set up so that the more successful you are the fewer children you have. And there is the rapidly-arriving promise of technologies like embryo selection or hypothetical future genetic engineering (or simply getting over the eugenics taboo and doing large-scale sperm donation), potentially allowing whichever group is willing to do it to tremendously improve themselves.

How many music videos actually have written or scripted reviews? Reaction videos have a lower barrier to entry than a blog post or scripted video, since you just have to watch and say what you're thinking, but a higher barrier and probably more detail than a Youtube comment. So if someone wants to hear what someone else thinks of a particular music video, they might be pretty much the only choice available. Also if a youtuber or streamer already has an audience they might be interested in what he has to say about something, even if they aren't very interested in the actual subject matter and it's low-effort content.

No, I meant to reply to cake's OP comment.

Off-world colonies don't need AI, and self-sustaining ones take an absolute sledgehammer to every X-risk except AI and dystopia (and aliens and God, but they're hardly immediate concerns). Dumb incentives for bio research can be fixed (and physics research, if and when we get to that). Dysgenics yields to PGT-P and sperm donors (although eugenics has some issues of its own).

Sure, but of course such measures being possible doesn't mean they'll actually be done.

Hell, even GOFAI research or uploads aren't likely to take much over a century, and would be a hell of a lot safer than playing with neural nets

This seems like too much certainty about the nature and difficulty of the task, which in turn influences whether significant delay actually increases the odds of success. For instance, if we turn out to live in a universe where superhuman AI safety isn't that hard, then the important thing is probably that it be done by a team that considers it a serious concern at all. Right now the leading AI company is run by people who are very concerned with AI alignment and who founded the company with that in mind, if we ban AI development and then the ban gets abandoned in 30 years there's a good chance that won't be the case again.

A candidate for such a universe would be if it's viable to make superintelligent Tool AIs. Like if GPT-10 can mechanistically output superhuman scientific papers but still doesn't have goals of its own. Such an AI would still be dangerous and you certainly couldn't release it to the general public, but you could carefully prompt it for papers suggesting more resilient AI alignment solutions. Some have argued Agent AIs would have advantages compared to Tool AIs, like Gwern arguing Tool AIs would be "less intelligent, efficient, and economically valuable". Let's say we live in a future where more advanced versions of GPT get routinely hooked up to other components like AgentGPT to carry out tasks, something which makes them significantly better at complicated tasks. OpenAI just developed GPT-10, which might be capable of superhuman scientific research. They can immediately hook it up to AgentGPT+ and make trillions of dollars while curing cancer, or they can spend 2 years tweaking it until it can perform superhuman scientific research without agentic components. It seems plausible that OpenAI would take the harder but safer route, but our 2050s AI company very well might not bother. Especially if the researchers, having successfully gotten rid of the ban, view AI alignment people the same way anti-nuclear-power environmentalists and anti-GMO activists are viewed by those respective fields.

Regarding talk of 100-year bans on AI while people steadily work on supposedly safer methods, I'm reminded of how 40 years ago overpopulation was a big mainstream concern among intellectuals. These ideas influenced government policy, most famously China's One Child policy. Today the fertility rate is substantially reduced (though mostly not by the anti-overpopulation activists), the population is predictably aging, and...the plan is completely abandoned, even though that was the entirely predictable result of dropping fertility. Nowadays if a country is concerned with fertility either way, it wants it to increase rather than decrease. Likewise the eugenics movement had ambitions of operating across many generations before being erased by the tides of history. In general, expecting your movement/ideas to retain power that long seems very risky.

people who do not and who never have existed can't be said to have "wants" in any meaningful sense

You should include people who will exist as well, as opposed to people who could potentially exist if you took other actions but will never actually exist. Otherwise something like "burying a deadly poison that you know will leach into the water table in 120 years" would be perfectly moral, since the people it will kill don't exist yet.

This kind of idiotic one-dimensional thinking is why I maintain that utilitarianism is fundementally stupid, evil, and incompatible with human flourishing.

As I mentioned, Preference Utilitarianism and Average Preference Utilitarianism are also forms of utilitarianism. And Total Utilitarianism doesn't imply wireheading either. Wireheading is only an implication of particularly literal and naive forms of hedonic utilitarianism that not even actual historical hedonic utilitarians would endorse; they would presumably either claim it isn't "real" happiness or switch to another form of utilitarianism.

Honestly, I think the main rhetorical advantage of non-utilitarian forms of ethics is that they tend to be so incoherent that it is harder to accuse them of endorsing anything in particular. But people being bad at formalizing morality doesn't mean they actually endorse their misformalization's implications. You just tried to express your own non-utilitarian beliefs and immediately endorsed sufficiently-delayed murders of people who aren't born yet; that doesn't mean you actually support that implication. But having non-formalized morality is no advantage in real life, and it often leads to terrible decisions by people who have never rigorously thought about what they're doing, because you really do have to make choices. In medicine, utilitarianism gave us QALYs while non-consequentialism gave us restrictive IRBs that care more about the slightest "injustice" than about saving thousands of lives; as a human who will require medical care, I know which of those I prefer.

omnicide

The view he is expressing is of course the opposite of this - that humanity surviving until it ultimately colonizes the galaxy is so important that anything that improves humanity's safety is more important than non-omnicidal dangers. Of course that would still leave a lot of uncertainty about what the safest path is. As I argued, significant delays are not necessarily more safe.

My 1e999999999999999 hypothetical future descendants who see utilitarian AIs as abominations to be purged with holy fire in the name of the God-Emperor are just as real as your "10^46 hypothetical people per century after galactic colonization" and thier preferences are just as valid.

To be clear the "preference" framing is mine, since I prefer preference utilitarianism. Bostrom would frame it as something like trying to maximize the amount of things we value, such as "sentient beings living worthwhile lives".

The point isn't whether such an outcome is particularly likely; it's that it's more likely than being kept barely alive by a series of staggeringly unlikely macroscopic quantum events. The idea behind quantum immortality is that, if many-worlds is true and all the worlds in it are truly "real", there will always be some small subset of worlds where you continue existing so long as this is physically possible. And a lot of things are physically possible if you get into extremely unlikely quantum fluctuations. Since you don't experience the worlds where you are already dead, an increasing percentage of your remaining future selves would have experienced whatever unlikely events are required to keep you alive. When I said "your society" that wasn't meant to refer to any current society, it was meant to refer to the idea of surviving as part of a society at all. As opposed to most of your future copies surviving as the only remaining human in your universe, floating in space after the destruction of Earth and staying alive only because in some tiny fraction of the Everett branches splitting off each instant some oxygen/etc. randomly appears and keeps you alive. Any future that doesn't require such a continuous series of coincidences will be a much larger fraction of the branches where you survive, and the most obvious such future is one where people deliberately invent the required technology. So whether quantum immortality is true or not, and whether or not you decide to care about the fate of future selves even if they only exist in a small fraction of branches, the expected outcomes of quantum immortality being true aren't the "kept barely alive by randomness" scenarios.

It is precisely the ability to convert between mild experiences and extreme experiences at some ratio that allows everything to add up to something resembling common-sense morality. If you don't, if the ranking of bad experiences from most mild to most severe has one considered infinitely worse than the one that came before, then your decision-making will be dominated by whichever potential consequences pass that threshold while completely disregarding everything below that threshold, regardless of how unlikely those extreme consequences are. You seem to be taking the fact that the risks in these hypotheticals are not worth actual consideration as a point against these hypotheticals, but of course that is the point the hypotheticals are making.

Moral reasoning is not really meant for such extreme numbers

Nothing in the universe will ever be 3↑↑↑3, but 7 billion people is already far beyond intuitive moral reasoning. We still have to make decisions affecting them whether our moral reasoning is meant for it or not. Which includes reacting differently to something bad happening to one person out of millions of beneficiaries than to one person out of hundreds of beneficiaries.
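For a sense of the scale, here is the standard expansion of Knuth's up-arrow notation (my own worked illustration):

```latex
\begin{align*}
3\uparrow 3 &= 3^3 = 27 \\
3\uparrow\uparrow 3 &= 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987 \\
3\uparrow\uparrow\uparrow 3 &= 3\uparrow\uparrow(3\uparrow\uparrow 3)
  = \underbrace{3^{3^{\cdot^{\cdot^{3}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
\end{align*}
```

Even a tower of height four, $3^{3^{27}} \approx 10^{3.6 \times 10^{12}}$, dwarfs the roughly $10^{80}$ atoms in the observable universe; 3↑↑↑3 is a tower about 7.6 trillion levels tall.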

Has anyone ever experienced such nerve damage as a result of a decision they took? Do we know that it's even theoretically possible? I can't imagine that really any amount of carpal tunnel is actually equivalent to many years of deliberate torture, even if 3↑↑↑3 worlds exist and we choose the person who suffers the worst carpal tunnel out of all of them. So I'd probably say that this risk is literally 0, not just arbitrarily small.

In some percentage of cases the cancer spreads to your brain, you get surgery to remove the tumor, and the brain surgeon messes up in precisely the right way. Both "locked-in syndrome" and chronic pain are things that happen, it's hardly a stretch to think a combination of both that paralyzes you for 50 years while you experience continuous agony is physically possible. And of course even if you were uncertain whether it was physically possible, that's just another thing to multiply the improbability by. It's not that rounding the probability down to 0 doesn't make sense in terms of practical decision-making, it's that "1 in 3↑↑↑3" odds are unimaginably less likely, so you should round them down to 0 too.

If you sum up all of the suffering and give it to a single person, IMO the minimal suffering will add up to a lot less than the maximal suffering.

I do not think this is a meaningful statement. We can decide which scenario is preferable and call that something like "net utility", but we can't literally "add up" multiple people's experiences within a single person. It doesn't have a coherent meaning, so we are free to arbitrarily imagine whatever we want. That said, to the extent that its meaning can be nailed down at all, I think it would favor avoiding the 3↑↑↑3 option. My understanding is that a single pain receptor firing once is not noticeable. If a form of suffering is instead barely noticeable, it is presumably "bigger" than a single pain receptor firing. There are only about 37 trillion cells in the human body, so the number of pain receptors is something smaller than that. So the first step in multiplying barely-noticeable suffering by 3↑↑↑3 is that it goes from "worse than a pain receptor firing" to "worse than every pain receptor firing continuously for an extended period". And that doesn't make a dent in 3↑↑↑3, so we multiply further, such as by making it last unimaginably longer than merely 10^100 times the lifespan of the universe.

That is a pretty arbitrary and meaningless matter of interpretation, though. A more meaningful measure would be the Rawlsian veil of ignorance: you're a random member of a population of 3↑↑↑3, so is it better for you that 10^100 of them be tortured or that all of them experience a dropped frame in a video? This is equivalent to what I answered in my previous post, that it would be foolish to sacrifice anything to avoid such odds.
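Putting rough numbers on that bet (my arithmetic, for illustration):

```latex
P(\text{you are among the tortured})
  \;=\; \frac{10^{100}}{3\uparrow\uparrow\uparrow 3}
  \;<\; \frac{1}{3\uparrow\uparrow 7{,}625{,}597{,}484{,}986}
```

a probability so far below anything physically realizable that no decision procedure could ever be penalized for rounding it to zero.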

Of course the sentence could have been clearer. It's sloppy conversational English relying on the reader to fill in part of the sentence which accidentally ended up having a more straightforward meaning that the writer did not intend, something akin to a garden-path sentence. If there was no context your interpretation would have been the more intuitive one. But there is context, and it's very unlikely that a Twitter employee would claim the rioters were all false flaggers rather than Trump voters, or argue it that particular way if he did. And I think that not only does my reading of it match what he meant, it matches how the other Twitter employees in the conversation interpreted it, how the reporters posting the conversation interpreted it, and how the people responding to you in this thread are interpreting it. So while it's a bit interesting that your reading of it is also possible based on the text it doesn't seem particularly significant.

Because democracy isn't just an arbitrary principle, it's a political technology for nonviolent resolution of unrest. People who live in your country but don't vote can still riot, can still strike, and can still join insurgent groups. There are ways to suppress the majority, but they are much more difficult and costly to the country than simply having them continue to live far away outside your borders where they can't readily do those things.

In democracies those tactics are mainly relegated to groups with minority political views that can't win at the ballot box, and sometimes they get their way by caring more than the majority or by having elite sympathizers, but most of the time it is advantageous to just participate in the democratic system instead. This has made democracies remarkably stable compared to other political systems. Your proposal, on the other hand, seems like it would fall to a Gandhi-style resistance campaign or violent revolution the first time there was a serious dispute between the natives and the disenfranchised descendants of immigrants.

The binding force behind all "woke" modern movements is anti-whiteness.

A handful of years ago the most prominent SJW focus was feminism, by far. Race got some obligatory mumbling about intersectionality and how white feminists need to listen to the lived experiences of women of color, but then everyone went back to what they really cared about. For that matter the SJW community has been a breeding ground for new identities to champion, like non-binary, demisexuals, otherkin, and plurals, with non-binary being the main one to get traction outside of a handful of sites like Tumblr. The SJW memeplex has relatively little to do with the specifics of the groups it claims to champion, making it quite mutable.

That doesn't make the anti-whiteness any less real, race-based prioritization of the COVID-19 vaccine alone killed tens or hundreds of thousands of white people. Even if future SJWs refocus on plurals or something, it is likely that without sufficient pushback captured organizations like the CDC will continue quietly making decisions like that about race. But don't assume they're dependent on any particular identity group or expect them to remain the same while you try to position yourself against them.

I was going to point out that people who got the vaccine were older and had a higher base death rate than those who didn't, so there is selection bias in any comparison. But then I actually clicked your link, and it's way dumber than that! It isn't comparing to people who didn't get the vaccine, it's comparing VAERS reports by length of time since vaccination. Whether to make a VAERS report is an arbitrary decision, and obviously doctors will be more likely to do it the closer to vaccination the event happened. If someone has a heart attack a few hours after being vaccinated there will almost certainly be a VAERS report; if someone has a heart attack months after being vaccinated there probably won't be, and that is true even if the risk of heart attack on day 0 and day 90 is exactly the same.
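A minimal simulation of that reporting artifact (my own sketch, not the linked analysis): the true daily heart attack rate is held constant, only the assumed probability that a doctor files a VAERS report decays with time since the shot, and the reports still pile up near day 0:

```python
import random

random.seed(0)
DAILY_RISK = 0.001  # constant true hazard: identical on day 0 and day 90

def report_probability(days_since_shot):
    """Assumed reporting behavior: doctors file less often as time passes."""
    return max(0.9 - 0.01 * days_since_shot, 0.05)

reports = [0] * 91
for _ in range(100_000):                 # simulated vaccinated patients
    for day in range(91):
        had_event = random.random() < DAILY_RISK
        if had_event and random.random() < report_probability(day):
            reports[day] += 1

print("reports in week 1 (days 0-6):   ", sum(reports[0:7]))
print("reports in week 13 (days 84-90):", sum(reports[84:91]))
# Week 1 shows far more reports than week 13 despite the identical underlying
# risk -- the same shape the linked chart treats as a vaccine effect.
```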

Then the argument moves to, well isn't puberty blockers irrecoverable harm to the child because of sterilization just like cutting off an arm? I'd say no, the issue isn't the loss of tissue it's the loss of capabilities.

There is good reason to believe that puberty blockers permanently hinder brain development, which hormones during puberty play an important role in. Unfortunately there are zero randomized control trials examining this, and even less evidence regarding using them to prevent puberty entirely rather than to delay precocious puberty a few years, but they have that effect in animal trials:

A reduction in long-term spatial memory persists after discontinuation of peripubertal GnRH agonist treatment in sheep

The long-term spatial memory performance of GnRHa-Recovery rams remained reduced (P < 0.05, 1.5-fold slower) after discontinuation of GnRHa, compared to Controls. This result suggests that the time at which puberty normally occurs may represent a critical period of hippocampal plasticity. Perturbing normal hippocampal formation in this peripubertal period may also have long lasting effects on other brain areas and aspects of cognitive function.

That study also cites this study in humans which found a 3-year course of puberty blockers to treat precocious puberty was associated with a 7% reduction in IQ, but since it doesn't have a control group I wouldn't put much weight on it.

Similar concerns were mentioned by the NHS's independent review:

A further concern is that adolescent sex hormone surges may trigger the opening of a critical period for experience-dependent rewiring of neural circuits underlying executive function (i.e. maturation of the part of the brain concerned with planning, decision making and judgement). If this is the case, brain maturation may be temporarily or permanently disrupted by puberty blockers, which could have significant impact on the ability to make complex risk-laden decisions, as well as possible longer-term neuropsychological consequences. To date, there has been very limited research on the short-, medium- or longer-term impact of puberty blockers on neurocognitive development.

Some people are asking whether people who accepted FTX money should have “seen the red flags” or “done more due diligence”.

I find this stuff really obnoxious. Since when has it ever been the job of charities to investigate the businesses of the people donating them money? EA or not, what charity does this? It would be a ridiculous waste of time and money; it's not their job, and specialization exists for a reason. People are talking like it's some deep failing that they didn't find him suspicious and refuse his money, but just how many legitimate donors should they be willing to refuse as "suspicious" for the sake of avoiding a criminal? Not that it would have been practical anyway: EA-endorsed charities are not some unified group, and a lot of his "EA" donations were stuff like directly supporting political candidates who promised to do something about pandemic preparedness.

We're not talking about Sequoia Capital, the venture-capital firm that has now written down $214 million in FTX equity, had access to internal information, and actually had a duty to their investors to try to avoid this sort of thing. Similarly, we're not talking about their other institutional investors like BlackRock, the Ontario Teachers' Pension Plan, Tiger Global Management, SoftBank Group, Lightspeed Venture Partners, and Temasek. We're not talking about Miami selling them the naming rights to a stadium for $125 million, giving them far more advertising than some blog posts saying "this billionaire supports EA, great!". Somehow EA is held to a much higher standard than any of these, even though it seems obvious to me that accepting donations should be held to dramatically lower standards than investing teachers' retirement money. EA should focus on effective charity, which is already a sufficiently ambitious specialty; it shouldn't focus on doing unpaid amateur investment analysis, trying to beat institutional investors at their own jobs for the sake of refusing donations that might turn out to be from a criminal.

The established institutions of our society — government, academia, media, NGOs, etc. — are filled top-to-bottom with true believers who hold this as a terminal value, and it’s not going away until they all do (which is a problem, because there’s no voting them out).

This isn't how ideological groups work. They do not hold power by being all fanatics who would support the same policies regardless of their factual beliefs about the world. Nor do they have unlimited ability to hold onto power if public opinion turns strongly against them. There are some people like that, but they rely on support from the much larger numbers of people who buy into mainstream "anti-racist" arguments premised on factually incorrect beliefs. Most supporters of any ideology are aligned with it by some mixture of traits like factual beliefs, trusted information sources, formal principles, and informal biases. Many of them can be persuaded by chipping away at their factual beliefs and their trust in their current sources of information. If mainstreaming HBD failed it would be because the vast majority continued denying it, not because people accepted it and then just shrugged. Affirmative action doesn't have majority support already, it hangs on through disproportionate elite support, but that doesn't mean it can continue to do so even if you persuade a large chunk of public/elite supporters.

Compare to libertarians. In theory, principle-based libertarians shouldn't even care how effective libertarianism is, right? The justifications are stuff like Freedom and the non-aggression principle, not effectiveness. But of course it's not a coincidence that they generally believe libertarianism is effective as well. There are presumably some libertarians who would, for instance, oppose conscription even if they sincerely believed it was the only way to prevent being conquered by a communist nation, or support open borders even if they thought it would result in statists taking power or otherwise end in disaster. But most wouldn't, and in fact I've noticed a notable number of libertarians and ex-libertarians online who became alienated from hardline libertarianism based on stuff like believing that open borders would end disastrously for liberty. And once you get into actually trying to set government policy alongside people who don't care about principled libertarianism, of course "Privatizing X will end terribly for everyone, but we should do it anyways because Freedom" isn't an argument anyone makes.

I do not think you are being nearly skeptical enough towards the account, not just regarding the possibility of deliberate lies but regarding how distorted memories can get regarding emotionally-charged events from 20 years prior. Have you ever had the experience of someone telling you a story regarding a grudge repeatedly over the course of years, and noticing it increasingly differ from your own recollection of the original story until you're pretty sure it's complete fiction? The way that, for instance, "X said Y, I bet that means he was thinking Z" becomes "X said Z"? (And then sometimes, upon further rumination, "X said Z, I bet that means he was thinking A" becomes "X said A".) If you haven't, trust me when I say it happens. Records like emails can tell you the actual contents of the email if you assume they weren't fabricated, but a lot rests not on them but on the context of the narrative surrounding them.

misrepresenting himself as having a "Don't ask, don't tell" arrangement with his wife

As an example, this is a description of the arrangement between two other people 20 years ago. It could easily mean that, for instance, she had agreed to the arrangement but exhibited some amount of jealousy, or something Dawn interpreted as jealousy.

lying to affair partners about having multiple simultaneous affairs

Meanwhile this could easily mean "had sex with me without mentioning that he had already had sex with someone else".

Or take this from the excerpt you tweeted:

From 2002 through 2020, all of Singer's female co-authors were women with whom Singer had been sexually involved, or to who he had made clear his sexual interest.

How the hell does she claim to know this about all his female co-authors for almost 20 years after their supposed relationship? For reference, here are his publications. This made me curious enough to download the original complaint, but there's no elaboration or evidence provided that I can see. The language of "made clear" is of course great material for distorted interpretations and memories; all sorts of meanings become "clear" when you're nursing a grievance for 20 years.

That effect is strongly subject to genetic confounding: most child-abuse victims inherit the genes of a child abuser, or at least those of a relative of one. Neither of your links takes that into account, so they don't tell us anything about causation. I don't know if there are any studies on the "cycle of abuse" which account for genetic confounding, but here is one on a related factor:

The Origins of Cognitive Deficits in Victimized Children: Implications for Neuroscientists and Clinicians

Individuals exposed to childhood victimization had pervasive impairments in clinically relevant cognitive functions, including general intelligence, executive function, processing speed, memory, perceptual reasoning, and verbal comprehension in adolescence and adulthood. However, the observed cognitive deficits in victimized individuals were largely explained by cognitive deficits that predated childhood victimization and by confounding genetic and environmental risks.

If Russia invaded Alaska and said "if you shoot back at our soldiers we will launch nuclear weapons", letting them conquer Alaska would be better than a nuclear exchange. Nonetheless the U.S. considers "don't invade U.S. territory" a red line that it is willing to go to war with a nuclear power to protect. The proposal would be to establish the hypothetical anti-AI treaty as another important red line, hoping that the possibility of nuclear escalation remains in the background as a deterrent without ever manifesting. The risk from AI development doesn't have to be worse than nuclear war; it just has to be worse than the risk of setting an additional red line that might escalate to nuclear war. The real case against it is that superhuman AI is also a potentially beneficial technology (everyone on Earth is already facing death from old age, after all, not to mention non-AI existential risks); if it were purely destructive, then aggressively pursuing an international agreement against developing it would make sense for even relatively low percentage risks.