Same as everywhere else, the people who made the decision are true believers who think this is a great idea for the Navy and/or for their moral/ideological goals.

As I see it, the military is probably the last place that would be under pressure to go woke - the Left hates it unconditionally and passionately anyway, it is impossible to "cancel" it in any meaningful way, you can not really orchestrate an ideological boycott against the military...

There's a weird tendency to personify institutions and act like principal-agent problems don't exist, like how people will tie themselves in knots trying to come up with explanations about how corporations with SJW institutional capture are actually profit-maximizing. Why would someone with a Navy recruitment job care more about "doing the best job possible to slightly improve Navy recruitment numbers" than "making the world a safe place for LGBTQ+ people"? Even more importantly, why wouldn't someone with such an ideology sincerely believe that he can do both? People are biased about the merits of their ideology in other circumstances, they don't turn that off when they're making decisions on behalf of an institution. They can tell themselves something like "This will boost recruitment by showing the Navy is an inclusive place for young people, anyone bigoted enough to object is an asshole who would cause problems anyway." and believe it.

More developments on the AI front:

Big Yud steps up his game, not to be outshined by the Basilisk Man.

It is a video from over two months ago in which he hyperbolically describes how implausible he thinks it is that the world imposes strict international regulations on AI development. It is not a new development just because someone on Twitter decided to clip it. He mentions nuclear weapons to illustrate how enforcing a treaty against a nuclear power is a hard problem. Of course, in reality, if one side considered it a high priority it is very likely an agreement could be found before escalating to that point, same as the existing agreements between nuclear powers. There isn't going to be a treaty banning AI development because not even the U.S. wants one, in part because the outcome of developing superhuman AI is so uncertain and controversial, not because "bright line that we will risk nuclear exchange to prevent you crossing" is something unimaginable in international relations.

If Russia invaded Alaska and said "if you shoot back at our soldiers we will launch nuclear weapons", letting them conquer Alaska would be better than a nuclear exchange. Nonetheless the U.S. considers "don't invade U.S. territory" a red line that it is willing to go to war with a nuclear power to protect. The proposal would be to establish the hypothetical anti-AI treaty as another important red line, hoping that the possibility of nuclear escalation remains in the background as a deterrent without ever manifesting. The risk from AI development doesn't have to be worse than nuclear war, it just has to be worse than the risk of setting an additional red line that might escalate to nuclear war. The real case against it is that superhuman AI is also a potentially beneficial technology (everyone on Earth is already facing death from old age, after all, not to mention non-AI existential risks); if it were purely destructive, then aggressively pursuing an international agreement against developing it would make sense even for relatively low percentage risks.

Both. Mostly I was contrasting to the obverse case against it, that risking nuclear escalation would be unthinkable even if it was a purely harmful doomsday device. If it was an atmosphere-ignition bomb being developed for deterrence purposes that people thought had a relevant chance of going off by accident during development (even if it was only a 1% risk), then aggressively demanding an international ban would be the obvious move even though it would carry some small risk of escalating to nuclear war. The common knowledge about the straightforward upside of such a ban would also make it much more politically viable, making it more worthwhile to pursue a ban rather than focusing on trying to prevent accidental ignition during development. Also, unlike ASI, developing the bomb would not help you prevent others from causing accidental or intentional atmospheric ignition.

That said, I do think that is the main reason that pursuing an AI ban would be bad even if it was politically possible. In terms of existential risk I have not read The Precipice and am certainly not any kind of expert, but I am dubious about the idea that delaying for decades or centuries attempting to preserve the unstable status quo would decrease rather than increase long-term existential risk. The main risk I was thinking about (besides "someone more reckless develops ASI first") was the collapse of current civilization reducing humanity's population and industrial/technological capabilities until it is more vulnerable to additional shocks. Those additional shocks, whether over a short period of time from the original disaster or over a long period against a population that has failed to regain current capabilities (perhaps because we have already used the low-hanging fruit of resources like fossil fuels), could then reduce it to the point that it is vulnerable to extinction. An obvious candidate for the initial collapse would be nuclear war, but it could also be something more complicated, like dysfunctional institutions failing to find alternatives to depleted phosphorus reserves before massive fertilizer shortages. Humanity itself isn't stable either: it is slowly losing intelligence and health both to outright dysgenic selection from our current society and to lower infant mortality reducing purifying selection, so the humans confronting future threats may well be less capable than we are. Once humans are reduced to subsistence agriculture again, the obvious candidate to take them the rest of the way would be climate shocks, which have greatly reduced the human population in the past.

Furthermore, I'm not that sympathetic to Total Utilitarianism as opposed to something like Average Preference Utilitarianism, I value the preferences of those who do or will exist but not purely hypothetical people who will never exist. If given a choice between saving someone's life and increasing the number of people who will be born by 2, I strongly favor the former because his desire to remain alive is real and their desire to be born is an imaginary feature of hypothetical people. But without sufficient medical development every one of those real people will soon die. Now, wiping out humanity is still worse than letting everyone die of old age, both because it means they die sooner and because most of those people have a preference that humanity continue existing. But I weigh that as the preferences of 8 billion people that humanity should continue, 8 billion people who also don't want to die themselves, not the preferences of 10^46 hypothetical people per century after galactic colonization (per Bostrom's Astronomical Waste) who want to be born.

people who do not and who never have existed can't be said to have "wants" in any meaningful sense

You should include people who will exist as well, as opposed to people who could potentially exist if you took other actions but will never actually exist. Otherwise something like "burying a deadly poison that you know will leach into the water table in 120 years" would be perfectly moral, since the people it will kill don't exist yet.

This kind of idiotic one-dimensional thinking is why I maintain that utilitarianism is fundementally stupid, evil, and incompatible with human flourishing.

As I mentioned, Preference Utilitarianism and Average Preference Utilitarianism are also forms of utilitarianism. And Total Utilitarianism doesn't imply wireheading either. Wireheading is only an implication of particularly literal and naive forms of hedonic utilitarianism that not even actual historical hedonic utilitarians would endorse; they would presumably either claim it isn't "real" happiness or switch to another form of utilitarianism.

Honestly, I think the main rhetorical advantage of non-utilitarian forms of ethics is that they tend to be so incoherent that it is harder to accuse them of endorsing anything in particular. But people being bad at formalizing morality doesn't mean they actually endorse their misformalization's implications. You just tried to express your own non-utilitarian beliefs and immediately endorsed sufficiently-delayed murders of people who aren't born yet; that doesn't mean you actually support that implication. But having non-formalized morality is no advantage in real life, and it often leads to terrible decisions by people who have never rigorously thought about what they're doing, because you really do have to make choices. In medicine utilitarianism gave us QALYs while non-consequentialism gave us restrictive IRBs that care more about the slightest "injustice" than about saving thousands of lives; as a human who will require medical care, I know which of those I prefer.

omnicide

The view he is expressing is of course the opposite of this - that humanity surviving until it ultimately colonizes the galaxy is so important that anything that improves humanity's safety is more important than non-omnicidal dangers. Of course that would still leave a lot of uncertainty about what the safest path is. As I argued, significant delays are not necessarily more safe.

My 1e999999999999999 hypothetical future descendants who see utilitarian AIs as abominations to be purged with holy fire in the name of the God-Emperor are just as real as your "10^46 hypothetical people per century after galactic colonization" and thier preferences are just as valid.

To be clear the "preference" framing is mine, since I prefer preference utilitarianism. Bostrom would frame it as something like trying to maximize the amount of things we value, such as "sentient beings living worthwhile lives".

Off-world colonies don't need AI, and self-sustaining ones take an absolute sledgehammer to every X-risk except AI and dystopia (and aliens and God, but they're hardly immediate concerns). Dumb incentives for bio research can be fixed (and physics research, if and when we get to that). Dysgenics yields to PGT-P and sperm donors (although eugenics has some issues of its own).

Sure, but of course such measures being possible doesn't mean they'll actually be done.

Hell, even GOFAI research or uploads aren't likely to take much over a century, and would be a hell of a lot safer than playing with neural nets

This seems like too much certainty about the nature and difficulty of the task, which in turn influences whether significant delay actually increases the odds of success. For instance, if we turn out to live in a universe where superhuman AI safety isn't that hard, then the important thing is probably that it be done by a team that considers it a serious concern at all. Right now the leading AI company is run by people who are very concerned with AI alignment and who founded the company with that in mind; if we ban AI development and then the ban gets abandoned in 30 years, there's a good chance that won't be the case again.

A candidate for such a universe would be if it's viable to make superintelligent Tool AIs. Like if GPT-10 can mechanistically output superhuman scientific papers but still doesn't have goals of its own. Such an AI would still be dangerous and you certainly couldn't release it to the general public, but you could carefully prompt it for papers suggesting more resilient AI alignment solutions. Some have argued Agent AIs would have advantages compared to Tool AIs, like Gwern arguing Tool AIs would be "less intelligent, efficient, and economically valuable". Let's say we live in a future where more advanced versions of GPT get routinely hooked up to other components like AgentGPT to carry out tasks, something which makes them significantly better at complicated tasks. OpenAI just developed GPT-10, which might be capable of superhuman scientific research. They can immediately hook it up to AgentGPT+ and make trillions of dollars while curing cancer, or they can spend 2 years tweaking it until it can perform superhuman scientific research without agentic components. It seems plausible that OpenAI would take the harder but safer route, but our 2050s AI company very well might not bother. Especially if the researchers, having successfully gotten rid of the ban, view AI alignment people the same way anti-nuclear-power environmentalists and anti-GMO activists are viewed by those respective fields.

Regarding talk of 100-year bans on AI while people steadily work on supposedly safer methods, I'm reminded of how 40 years ago overpopulation was a big mainstream concern among intellectuals. These ideas influenced government policy, most famously China's One Child policy. Today the fertility rate is substantially reduced (though mostly not by the anti-overpopulation activists), the population is predictably aging, and...the plan is completely abandoned, even though that was the entirely predictable result of dropping fertility. Nowadays if a country is concerned with fertility either way, it'll want it to increase rather than decrease. Likewise the eugenics movement had ambitions of operating across many generations before being erased by the tides of history. In general, expecting your movement/ideas to retain power that long seems very risky.

Why do you think this has anything to do with utilitarianism? Utilitarianism doesn't value the lives and well-being of mass-murderers any less than it values anyone else. It only recommends harming them as an instrumental goal to serve a more important purpose, such as saving the lives of others. A 20-year-old who raped and killed a dozen children still has plenty of potential QALYs to maximize, even adjusting his life-quality downward to account for being in prison. It's expensive, but governments spend plenty of money on things with lower QALY returns than keeping prisoners alive. Also, OP only differs from conventional death-penalty advocacy in that he seems concerned with the prisoners consenting, proposing incentivizing suicide instead of just executing them normally, and once again that is not something utilitarianism is particularly concerned with except in instrumental terms.

The utilitarian approach would be to estimate the deterrent and removal-from-public effect of execution/suicide-incentivization/life-in-prison/etc. and then act accordingly to maximize the net welfare of both criminals and their potential victims. It doesn't terminally value punishing evil people like much of the population does, though I think rule-utilitarianism would recommend such punishment as a good guideline for when it's difficult to estimate the total consequences. (In Scott's own Unsong the opposition of utilitarians to the existence of Hell is a plot point, reflecting how utilitarianism doesn't share the common tendency towards valuing punishment as a terminal goal.) But neither is utilitarianism like BLM in that it cares more about a couple dozen unarmed black people getting shot in conflicts with police than about thousands of additional murder victims and fatal traffic accidents per year from a pullback in proactive policing. That's just classic trolley-problem material: if one policy causes a dozen deaths at the hands of law-enforcement, and the other policy causes thousands of deaths but they're "not your fault", then it's still your responsibility to make the choice with the best overall consequences. There are of course secondary consequences to consider like the effect on police PR affecting cooperation with police, but once you're paying attention to the numbers I think it's very difficult to argue that they change the balance, especially when PR is driven more by media narratives than whether the number is 12 or 25 annually.

Notably, when utilitarians have erred regarding prisoners it seems to have been in the exact opposite direction you're concerned about. A while back someone here linked a critical analysis of an EA organization's criminal-justice-reform funding. They were primarily concerned with the welfare of the criminals rather than with secondary effects like the crime rate, presumably because the effect on the criminals' welfare is easier to estimate; an easy mistake, but one reflecting the importance of utilitarians avoiding the streetlight effect. It was also grossly inefficient compared to other EA causes like third-world health interventions. They did end up jettisoning it (by spinning it off into an independent organization without Open Philanthropy funding), but not before spending $200 million, including $50 million on seed funding for the new organization. However, I think a lot of that can be blamed on the influence of social-justice politics rather than on utilitarian philosophy, and at least they ultimately ended up getting rid of it. (How many other organizations blowing money on "criminal justice reform" that turns out to be ineffective or harmful have done the same?) In any case, they hardly seem like they're about to start advocating for OP's proposal.

For instance, women seem more able to put themselves in the shoes of male protagonists in fiction, while men generally seem uninterested in female protagonists.

In anime and manga there are entire genres, most obviously slice-of-life comedies, where it is typical to have nearly 100% female casts (and a 50% or higher male audience). Female characters are a publishing requirement at plenty of manga magazines, and not for ideological reasons. Here is a relevant extra from the comedy manga/anime D-Frag, which ended up with a main cast that looks like this. The same is true for anime-style videogames, in particular gacha games which have an emphasis on character design. Even aside from the subsets of Japanese/Japanese-inspired media doing their best to tile the universe with cute girls, plenty of stories from times and places unconcerned with feminism have gone out of their way to incorporate female characters into roles like "warrior" which would realistically be all male, from ancient myths to modern fantasy.

If a subset of modern western characters like the female Captain Marvel aren't appealing to men, perhaps it is because none of the people involved with creating them designed them to be. That doesn't mean they can't be "strong" or whatever, female anime/manga characters are varied and include those with nearly every kind of "strength" imaginable, both the kinds of strength primarily associated with men and the kinds that aren't. But it does mean they shouldn't be designed by people who view "making a strong female character" or "making sure not to incorporate misogynistic tropes" as primary goals in character writing, which often takes precedence over concerns like making the character likable or interesting. Indeed, most of those strong female anime/manga characters were written by people who have probably never encountered a phrase like "strong female character" in their lives, let alone having them as important categories shaping how they think about writing fiction.

Missing a not, anyway this is just a made up principle. Annexation is beyond the pale but we can bomb various countries into anarchy, set up puppet governments in them, divide countries into smaller parts, place troops in countries without their permission?

Yes. Wars of annexation materially strengthen aggressors and incentivize further war, they are a sort of geopolitical positive feedback loop. In the modern era going to war makes you weaker and poorer, less capable of waging war rather than more. Sometimes countries are willing to do it anyway, and of course there is gaming of the boundaries, but keeping the feedback loop negative rather than positive helps stop this getting too out of hand. How harmful (or beneficial) the war is to the country being invaded isn't really relevant to that, the important thing is that it be harmful to the aggressor. For instance the invasion of Iraq imposed a cost rather than a benefit on the U.S. (as even most of its proponents knew it would) so it didn't result in a series of more U.S. invasions, but the Russian invasion of Crimea was sufficiently beneficial that it chain-reacted into the invasion of the rest of Ukraine.

Wars must have no winners, only losers, and to ensure this continues to remain the case countries are willing to take losses themselves so that attempted wars of annexation leave the aggressor indisputably worse-off. Complaining that countries are "irrationally" willing to harm themselves for the sake of deterrence is deeply silly, it's basic game-theory and central to the logic of modern war. If Russia thought countries wouldn't really be willing to harm themselves for no benefit besides vague principles of game-theoretic value, that's just another way that Russia's status as a low-trust society has made them weaker.

"Anti-woke" includes many things that are beneficial to black people, most obviously in that it opposes wokeness in areas that have nothing to do with race, but also even within the realm of race. For instance, consider the CDC's COVID-19 vaccine prioritization policy. They deprioritized older people relative to essential workers because older people are more white, even though they estimated this would result in many additional deaths (especially if the vaccine was less effective at preventing infection than serious disease, which turned out to be the case). This policy killed more black people it just killed even more white people so the proportion of the deaths was more white. How did it benefit black people that more of them died so that more white people would die so that the percentages looked better to woke ACIP/CDC officials? Take the argument from the expert on ethics and health-policy the NYT quoted:

“Older populations are whiter,” Dr. Schmidt said. “Society is structured in a way that enables them to live longer. Instead of giving additional health benefits to those who already had more of them, we can start to level the playing field a bit.”

I don't think the average black person would really be sympathetic to this argument, even before you pointed out it was also going to kill more black people. These sorts of arguments are mostly only appealing to the woke. And of course the same is true for plenty of less life-or-death issues, like Gamergate's NotYourShield consisting of women and minorities who didn't think they benefited from journalists defending themselves by accusing critics of being sexist/racist/etc.

Furthermore, even within the limited realm of affirmative-action I don't think wokeness genuinely serves the racial self-interest of black people. There are many more black people who benefit from infrastructure than from racial quotas in infrastructure contracts, more who need medical care than who go to medical school, more who use Google than who work for Google. It isn't just the principles that want the black percentage to be high vs. the ones that want it to be low, there is an inherent asymmetry because meritocracy isn't just an arbitrary "principled libertarian stance", it serves an important functional purpose.

Of course diversity advocates also sometimes say that affirmative-action/etc. benefits everyone, it's just that they're wrong. Other times racial resentment and malice clearly play a role, but even then that doesn't mean it actually serves racial self-interest. In general I think ideological conflicts have a lot more true believers and a lot fewer people cynically pursuing their interests than people tend to think they have.

The proponents were saying 'let's get rid of Saddam it'll be easy and stabilize the Middle East, spread democracy, make new allies...'.

Helping Iraqis and the Middle East doesn't significantly materially strengthen the U.S., it's expending U.S. resources and power for the sake of charity. This is inherently self-limiting, the U.S. has resources to waste on things like this but in the end it is left with less capability to wage war than it started with. Having Iraq as an ally or vassal was never going to be valuable enough to be worth a war, even if it was as easy as proponents thought it would be, and proponents of the war instead justified the war in terms of humanitarian (Saddam, democracy) or threat-reduction (WMDs) concerns. And the U.S. didn't even really turn Iraq into a vassal, it's a democracy that has been at times vocally critical of the U.S. and there is no guarantee that U.S./Iraq relations won't worsen further in the future. It would have been far easier to turn it into an ally in some other way, like buddying up to Saddam or replacing him with some other dictator. Proponents of the Iraq war didn't say they would turn Iraq into a vassal, they said they would turn it into a democracy, and that is indeed what they did. It was the opponents of the Iraq war who said the U.S. would materially benefit, the "No blood for oil" people, but that was never remotely realistic and the proponents didn't say it was.

Is that your real objection? If it was instead a serial killer who you believe doesn't have any particularly inaccurate beliefs about his victims, but simply enjoys killing people and has been hunting the person you're hiding as his next target, would you tell him the truth or would you come up with a different excuse for why it's acceptable to lie?

It seems like probably the real reason you don't tell the truth is simply that if you do it'll result in someone's death and no real gain, just adherence to the "don't lie" rule. But if that's your reason then just say that's your reason, rather than obscuring it behind excuses specific to the situation.

He is likely referring to this from pages 11-12 of the GPT whitepaper:

GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, after the post-training process, the calibration is reduced (Figure 8).

In any case, the articles you quote are oversimplified and inaccurate. Predicting text (and then satisfying RLHF) is how it was trained, but the way it evolved to best satisfy that training regime is a bunch of incomprehensible weights that clearly have some sort of general reasoning capability buried in there. You don't need to do statistical tests of its calibration to see that, because something that was truly just doing statistical prediction of text, without having developed reasoning or a world-model to help with that task, wouldn't be able to do even the most basic reasoning like this unless it already appeared in the text it was trained on.

It's like saying "humans can't reason, they're only maximizing the spread of their genes". Yes, if you aren't familiar with the behavior of LLMs/humans, understanding what they evolved to do is important to understanding that behavior. It's better than naively assuming that they're just truth-generators. If you wanted to prove that humans don't reason you could point out all sorts of cognitive flaws and shortcuts with obvious evolutionary origins and say "look, it's just statistically approximating what causes more gene propagation". Humans will be scared of things like spiders even if they know they're harmless because they evolved to reproduce, not to reason perfectly, like an LLM failing at Idiot's Monty Hall because it evolved to predict text and similar text showed up a lot. (For that matter humans make errors based on pattern-matching ideas to something they're familiar with all the time, even without it being a deeply-buried instinct.) But the capability to reason is much more efficient than trying to memorize every situation that might come up, for both the tasks "predict text and satisfy RLHF" and "reproduce in the ancestral environment", and so they can do that too. They obviously can't reason at the level of a human, and I'd guess that getting there will involve designing something more complicated than just scaling up GPT-4, but they can reason.

Despite being based on GPT-4 Bing is apparently well-known for performing dramatically worse. There have been some complaints of GPT-4's performance degrading too, presumably due to some combination of OpenAI trying to make it cheaper to run (with model quantization?) and adding more fine-tuning trying to stop people from getting it to say offensive things, but hopefully not to the extent that it would consistently fail that sort of world-modeling. (If anyone with a subscription wants to also test older versions of GPT-4 it sounds like they're still accessible in Playground?)

I don't think it's plausible that all the examples of GPT-4 doing that sort of thing are faked, not when anyone shelling out the $20 can try it themselves. And people use it for things like programming, you can't do that without reasoning, just a less familiar form of reasoning than the example I gave.

My problem is, while I'm sure that not all the examples of GPT-4 seeming to get complex reasoning tasks are fake, if they cannot be replicated, what good are they?

I am saying they can be replicated, just by someone who unlike you or me has paid the $20. I suppose it is possible that the supposed degradation in its capabilities has messed up these sorts of questions as well, but probably not.

If GPT-4's ability to "reason" is ephemeral and seemingly random, is it really reasoning, or is it just occasionally getting lucky at ordering abstract tokens for it's monkey overlords?

There is a big difference between random guessing and having a capability that sometimes doesn't work, particularly if the chance of randomly getting the right result without understanding is low enough. Text generators based on Markov chains could output something that looked like programming, but they did not output working programs, because producing a novel working program is unlikely enough that it is not something you can just randomly stumble upon without some idea of what you're doing. In any case, as far as I know GPT-4 is not that unreliable, especially once you find the prompts that work for the task you want.
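To make the contrast concrete, here is a minimal sketch (my own illustration, not anything from the linked discussion) of a word-level Markov chain text generator in Python. It can only replay continuations it has literally seen in its training text, so its output looks locally plausible while having no global plan, which is why such generators never assemble novel working programs.

```python
# Minimal word-level Markov chain text generator (illustrative sketch only).
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    """Start from a random observed state and repeatedly sample an observed successor."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        successors = chain.get(state)
        if not successors:  # no observed continuation: the chain has nothing more to say
            break
        out.append(random.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Fed snippets of code, it emits code-shaped word salad, not new programs.
corpus = "def add(a, b): return a + b def sub(a, b): return a - b"
print(generate(build_chain(corpus), length=15))
```

The generator has no notion of what it is trying to accomplish, only of which words have followed which; that is the kind of "lucky ordering of tokens" that genuinely cannot produce a working novel program.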

Which makes sense to me, because a lot of those tests involve more generalized and flexible reasoning than the sorts of formalized mathematical logic examples it might plausibly be trained to imitate.

How well it reasons is a different question from whether it reasons at all. It is by human standards very imbalanced in how much it knows vs. how well it reasons, so yes people who think it is human-level are generally being fooled by its greater knowledge. But the reasoning is there and it's what makes a lot of the rest possible. Give it a programming task and most of what it does might be copying common methods of doing things that it came across in training, but without the capability to reason it would have no idea of how to determine what methods to use and fit them together without memorizing the exact same task from elsewhere. So practical use is generally going to involve a lot of memorized material, but anyone with a subscription can come up with novel questions to test its reasoning capabilities alone.

Scientific American: Coal Ash Is More Radioactive Than Nuclear Waste

the study i linked found that for the energy generated, more radiation is given out by fly ash, which contains trace amounts of uranium and thorium. while the amount of radiation that makes it into people from both of these sources isn't dangerous, it's worth pointing out when given the concerns of "gonna be irradiated."

The title of that article is laughably false. The underlying point it is based on, that under normal operation a nuclear plant releases less radioactive material into the environment than a coal plant, is technically true but grossly misleading. Under normal operation nuclear plants release essentially no radioactive material, the radioactivity concern is concentrated purely into the possibility of something going wrong. Sanity-check I did after encountering this argument a decade ago:

The EPA gives the radioactivity of average fly ash as 5.8 picocuries per gram, and the U.S. produces around 125 million tons of coal-combustion byproducts per year as of 2006. If we overestimate and assume all coal-combustion byproducts are the more-radioactive fly ash, that comes to around 658 curies worth of material per year. By comparison, a year after the Fukushima disaster TEPCO estimated total radiation releases as 538,100 terabecquerels - equivalent to 14,543,243 curies. Note that this assumes all fly ash is being released into the environment when modern first-world plants safely capture most of it. So one year after the Fukushima disaster it had already released more radiation than 22,000 years of 2006-era U.S. coal radiation emissions, under very pessimistic assumptions. Which means we can confidently estimate Fukushima has released far more radiation than all the coal burned in human history and all the coal remaining in the ground that could be burned combined.
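For anyone who wants to redo the arithmetic, here is the same back-of-the-envelope calculation in Python, using only the figures quoted above (5.8 pCi/g fly ash, ~125 million short tons of coal-combustion byproducts per year treated entirely as fly ash, and TEPCO's 538,100 TBq estimate):

```python
# Back-of-the-envelope comparison using the figures quoted in the post.
GRAMS_PER_SHORT_TON = 907_185       # grams in one short ton
PICOCURIES_PER_CURIE = 1e12
BQ_PER_CURIE = 3.7e10

fly_ash_grams = 125e6 * GRAMS_PER_SHORT_TON              # pessimistically treat all byproducts as fly ash
coal_curies_per_year = fly_ash_grams * 5.8 / PICOCURIES_PER_CURIE
fukushima_curies = 538_100e12 / BQ_PER_CURIE             # 538,100 TBq converted to curies

print(f"U.S. coal byproducts: ~{coal_curies_per_year:,.0f} Ci/year")
print(f"Fukushima estimate:   ~{fukushima_curies:,.0f} Ci")
print(f"Ratio: ~{fukushima_curies / coal_curies_per_year:,.0f} years of coal emissions")
```

This comes out to roughly 658 Ci per year of coal byproducts versus about 14.5 million Ci for Fukushima, i.e. on the order of 22,000 years of U.S. coal emissions.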

This doesn't mean that nuclear power is overall a bad idea, but it's definitely not because coal is a worse radioactivity concern. From what I've heard this particular misleading talking point has been going around even before it started circulating on the internet, I remember someone telling me that it was going around Stanford decades ago. People should be cautious with counterintuitive factoids like this, because often they spread because they are too good to check.

I addressed this in the footnote.

But it's not true that "for the energy generated, more radiation is given out by fly ash". You didn't say "so long as nothing goes wrong", so the average amount of radiation released per energy produced includes the risk of disaster. And since nuclear power plants involve significantly radioactive material and coal plants don't, even a tiny risk is enough to push the average way above coal plants. The fact that Fukushima alone released more radioactivity than the fly ash we would get from burning all coal deposits on Earth makes this clear.

It is a quite common myth that living near a nuclear power plant emits radiation during ongoing operations.

Then just say "nuclear power plants release virtually no radiation under normal operation". Don't try to make it sound like nuclear beats coal in terms of radiation, on a technicality sufficiently narrow that both you and the Scientific American article you link (and the people I've seen bring up this talking point before) stumble into outright falsehood. Nuclear beats coal on plenty of metrics, there is no need to compare them in terms of radioactivity besides the appeal of being counterintuitive.

Wouldn’t the rarity of the catastrophic failure matter as well?

Which is why you do enough math to sanity-check the comparison. As I mentioned, Fukushima released more radioactivity than would be released by burning all the coal deposits on Earth. Nuclear power plants involve relevant amounts of radioactivity, coal plants don't. The fact that a release like Fukushima happened even once implies the odds aren't low enough to overcome the massive difference in radioactivity. Nuclear has plenty of advantages, and the risk of catastrophic failure is low enough that those other advantages might easily outweigh it, but being less of a radiation risk than coal is not one of them.

Subliminal messaging doesn't work, ideological messaging does. Both the "look at Falwell saying crazy stuff about Tinky Winky" rhetoric and to a much lesser extent the "look at Tinky Winky being a gay icon" rhetoric presumably contributed to strengthening the social-justice ideological framework in which homosexuality is high-status, leading people to identify as gay and then sometimes even have gay sex. But there's no reason to believe the character himself did, because whether his supposed gay associations were intentional or not (probably not) the vast majority of people looking hard enough to see it already had strong ideological views on the subject. The existence of a character like that does nothing to strengthen those views, while a news story about how one of the enemy is stupid does. Same way crossdressing stories like Mulan aren't what caused the massive surge in transgenderism. Or antifa people attacking people at conservative protests and claiming to be inspired by Captain America or historical WW2 veterans - what inspired them is the antifa memeplex itself.

It is fundamentally missing the point of the recent surge in social-justice "identities", because for the most part it isn't even about the actual features of those groups, it is about the ideology itself. Thus the popularity of things like "grey-asexual" identities that let you be asexual while having sex or "non-binary" identities that let you be transgender without transitioning. That doesn't mean the surge in those identifications isn't connected to behavior, there really are a lot more people having gay sex even if they're a smaller percentage of those identifying as gay. This increase is of course most dramatic with transgenderism, where it's looking like (contrary to the concept of gender identity) there isn't much stopping people from transitioning when their ideology and social circle pushes them towards it. But this transmits through the ideological memeplex, not fictional characters being vaguely non-masculine.

The binding force behind all "woke" modern movements is anti-whiteness.

A handful of years ago the most prominent SJW focus was feminism, by far. Race got some obligatory mumbling about intersectionality and how white feminists need to listen to the lived experiences of women of color, but then everyone went back to what they really cared about. For that matter the SJW community has been a breeding ground for new identities to champion, like non-binary, demisexuals, otherkin, and plurals, with non-binary being the main one to get traction outside of a handful of sites like Tumblr. The SJW memeplex has relatively little to do with the specifics of the groups it claims to champion, making it quite mutable.

That doesn't make the anti-whiteness any less real, race-based prioritization of the COVID-19 vaccine alone killed tens or hundreds of thousands of white people. Even if future SJWs refocus on plurals or something, it is likely that without sufficient pushback captured organizations like the CDC will continue quietly making decisions like that about race. But don't assume they're dependent on any particular identity group or expect them to remain the same while you try to position yourself against them.

No, I meant to reply to cake's OP comment.

sitting members of Congress - who are saying "yeah I've seen some of the evidence, and it's crazy, and there's something here we need to look into", then it makes explanations involving hallucinations and weather balloons less plausible.

It makes hallucinations much less plausible, but I don't think it really does much about misidentified balloons, glare, something on the camera lens, camera image-sharpening algorithms combining with those others, etc. See the videos in this Metabunk thread for examples. I don't think congressmen have any special skills to distinguish whether a fast-moving blob on an aircraft camera is a spacecraft or a visual artifact. And while people like pilots and intelligence agents might be better, it isn't really their area of expertise either. They're focused on dealing with real planes, not every weird visual artifact that can happen. On the scale of a country you can cherrypick enough things that coincidentally seem alien-like to be convincing to many people, including many government officials. But if it's ultimately all formed out of random noise you'll never get that definitive piece of evidence, just lots of "that blob was crazy and we couldn't figure out another explanation", which is the pattern we've seen.

You start out talking about not writing a political statement but then end up talking about how to write political propaganda that, unlike most political propaganda, isn't poorly-written or obnoxious. Those are different goals that involve going down diverging pathways. In particular, if you're going to spend time and effort thinking about this sort of thing, how about spending it thinking about the ideologies that exist within your fictional world? Not as an allegory, not as an insertion of current issues with or without commentary, but as part of the worldbuilding. And then instead of deciding ahead of time whether an ideology or political faction is "right" or "wrong" or "it's complicated" based on how it maps to the civil-rights movement or transgenderism or whatever, evaluate it (and let your audience evaluate it) on its own terms, as an outgrowth of relevant issues in the world you have created.

Jeff Vogel of Spiderweb Software talks about something similar:

I put a ton of politics into my games, but I write political philosophy, not comments on current events. My games are not about any one Big Issue Of The Day. They are about the base principles we have that help us make our own opinions about those issues.

Instead of looking to contemporary political controversies for your inspiration, you can try looking elsewhere. You can look to history, to political conflicts where every side and even the issues they consider important are likely to be one or both of "alien" or "timeless" to modern perspectives. Similarly you can look to old political philosophy. Or to fiction that is at least old enough to not be part of the current political zeitgeist. You can look to science and technology, to the sorts of things that societies could theoretically be doing if they had different values or structures. You can look at all the setting elements you have for other reasons, for game mechanics or because they're cool or because they're part of the genre or because you had to make some sort of map/factions/history, and seriously think through how people in that world would relate to them.

Think about questions like what views are functional, whether functional for society or the individual or for some subgroup. For a recent example imagine if, before the invention of AI art, you wrote a setting where AI art was possible. I think you probably could have predicted the backlash from some artists, on grounds like economic self-interest and their self-conception, and predicted a lot of the specific rhetoric. Or, if it was invented a while ago, there's other questions like what sort of economic role it ends up fitting into long-term. I don't think this would necessarily be the most compelling setting element, it probably wouldn't be central, but I think it would probably be more interesting than inserting either contemporary politics or a metaphor for them. Maybe some reviewer would interpret it as you criticizing real-world automation as stripping meaning from work, but I don't think it would benefit from you approaching the writing as a metaphor, except perhaps by using history as a reference for how these conflicts can play out.

You don't have to do this, not every work (especially videogames) needs to have ideologies and political conflicts invented for its worldbuilding. The Law of Conservation of Detail is a very real concern, though it can enhance even briefly-mentioned details if you've put more thought into them than the audience expects. But if you don't want to do this you probably shouldn't be wasting your time and the audience's attention-span on contemporary politics either. In that case just use the superficial details that seem to match your setting/genre/aesthetic and don't do anything more. It is unlikely anyone will care. Yes, there have been cases like Kingdom Come: Deliverance (targeted by Tumblr pseudohistorian medievalpoc and then game journalists for not having "POC" in their piece of medieval Europe), but there are too many games coming out for people to create controversies like that about a meaningful fraction. Especially if you're not dumb enough to respond on social media or release a statement/apology.

I'm trying to figure out how I would make either characters that are never called attention to, or characters that are an allegory . . . for trans people.

One reason transgenderism tends to be particularly badly written in fiction, particularly fiction not set in a western country in 2023, is because it entails an ideological framework that is highly specific and restricted to a particular place and time. People will write a medieval fantasy setting and give characters views popularized on the internet less than a decade ago. Even people who don't think they're writing fiction, like Wikipedia editors writing about historical women who disguised themselves as men, will try to fit it into the trans framework (sometimes resulting in the Wikipedia article having male pronouns). Historical eunuchs and the ideological viewpoints regarding them are more genuinely alien than "what if aliens had...4 genders" or "what if aliens were genderfluid shapeshifters", because neither eunuchs nor the viewpoints regarding them were based on contemporary ideas like gender identity to begin with.

The linked study is based on scoring higher on scales for "Hostility Towards Women", "Rape Myth Acceptance", and "Sexual Objectification". Reading the appendix, these scales are sufficiently low-quality that it is difficult to conclude much from them, at least not without the data for how people responded to individual questions.

Some of the 10 items on the "hostility towards women" scale include "I feel that many times women flirt with men just to tease them or hurt them.", "I am sure I get a raw deal from the women in my life. ", and "I usually find myself agreeing with women. (Reverse coded)". It doesn't really provide novel information to learn that someone romantically unsuccessful has worse experiences with women and is less likely to have someone like a wife in his life that he is more likely to agree with than if the women he interacts with are strangers. (It's also a bit funny to imagine someone making a "hostility towards men" scale and making one of the items "I usually find myself agreeing with men. (Reverse coded).")

Meanwhile large sections of "Rape Myths" and "Sexual Objectification" are things the now-successful Hanania would presumably agree with. Questions like that are going to pick up on very broad demographic correlations with ideology. The ideological bias on display also makes me more skeptical about the people conducting these studies. Examples of the 11 "Rape Myths" include "To get custody for their children, women often falsely accuse their ex-husband of a tendency toward sexual violence.", "Many women tend to exaggerate the problem of male violence.", and "It is a biological necessity for men to release sexual pressure from time to time." (The last would naturally correlate with high sex drive and thus sexual dissatisfaction.) Examples of the 10 "Sexual Objectification" items include "Being with an attractive woman gives a man prestige.", "Using her body and looks is the best way for a woman to attract a man.", and "Sexually active girls are more attractive partners.".

Also, some of these statements seem sufficiently unarguable that responses to them may be heavily influenced by the respondents' social desirability bias. For instance, if many of the men disagreeing that "Being with an attractive woman gives a man prestige." or "Sexually active girls are more attractive partners." believe otherwise but are the type to answer surveys with what they perceive as the most socially desirable answers, are they also more likely to misrepresent how sexually satisfied they are? And the second item would also measure sex drive.