
Culture War Roundup for the week of March 2, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Contra space colonization

A couple of arguments against space colonization, in order of how convincing they are to me. A lot of arguments in favor of space colonization rest on a specious analogy between the colonization of the Americas and Mars/Venus/the moons of Jupiter. While these analogies may highlight psychologically similar explorer mindsets, I think they completely miss the physical realities of space.

1. Ecology and Biology

The newest Tom Murphy post from DoTheMath has clarified what I believe to be a huge blind spot in the space colonization narrative that many on this forum share: ecology! Murphy's argument is that we've never successfully created a sealed, self-sustaining ecology that lasts for anything close to a human lifespan. Biosphere 2 lasted for approximately 16 months, and the EcoSphere that Murphy uses as an example in the article lasts for about 10 years, but ultimately collapses because the shrimp fail to reproduce. Both of these "sealed" examples occur on Earth, shielded from radiation and in moderate ambient temperatures. This will not be the case on Mars, nor on the 9-month journey to the Red Planet.

Even outside of sealed environments, island ecologies on Earth are notoriously unstable because of population bottlenecks that eliminate genetic diversity and make key species vulnerable to freak viruses or environmental disruption.

Of course a Mars colony won't be an ecological island, at least at first, because of constant shipments from Earth of supplies and genetic material (humans, bacteria, crops, etc.). But unless the colony can eventually become self-sustaining, I'm not sure what the point of "colonization" actually is. It's not clear that mammals can even reproduce in low-gravity environments, and barring a large-scale terraforming effort that would likely take millennia, any Mars colony will be an extraterrestrial version of Biosphere 2 without the built-in radiation shielding and pleasant ambient temperature.

Constant immigration and resupply missions will also be incredibly challenging. Nine months in radiation-rich deep space in cramped, near-solitary confinement is not something many humans can necessarily endure. Every simulated Mars mission has ended with the participants at each other's throats before "arrival" at the planet. Astronauts on the ISS, who receive relatively small doses of radiation compared to deep space, experience cancers at much higher rates, and probably suffer significant damage to their reproductive genetics.

Contrast this with the colonization of the Americas. The initial colonists of both Massachusetts and Virginia were terribly unprepared for what was, at least compared to space, a relatively benign ecological context. There was clean air, water, shielding from radiation, and relatively plentiful food. Yet these colonies nearly died out in their first winter because of poor planning, and were only saved by the help of Native Americans. There are no Native Americans on Mars, and no deer or wild berries to hunt in the woods if farming fails or a supply ship is missed. Mars colonists won't be rugged frontiersmen, but extremely fragile dependents of techno-industrial society.

I'm not saying it's impossible to overcome these challenges, but it does seem irresponsible to waste trillions of dollars and thousands of lives on something we are pretty sure won't work.

2. Motivation

The primary initial motivation for New World colonization was $$$. The voyages of discovery were looking for trade routes to India to undercut the Muslim stranglehold on the spice trade. Initial Spanish colonization was focused on exploiting the mineral wealth of Mexico and Peru, French colonization on the fur trade, and English colonization on cash crops like tobacco.

In space, there is almost zero monetary incentive for colonization. Satellites and telecommunications operate fine without any human astronauts, and even asteroid mining, which is a dubious economic proposition in the first place, doesn't really benefit from humans being in space. Every kind of resource extraction that we might want to do in space is better accomplished by robots for orders of magnitude less money.

What about Lebensraum? If that's really the issue, why don't we see the development of seasteads or self-sufficient cities in otherwise inhospitable regions of Earth (the top of Everest, for example)?

3. Cost

Keeping an astronaut on the ISS costs about $1M per astronaut per day, and that's a space station relatively close to Earth. Granted, low Earth orbit (LEO), where the ISS sits, is halfway to most places in the inner solar system in terms of delta-v, so we're probably not talking about more than $10M/day per person for a Mars mission. For a Mars colony of 100 people, that's close to a billion dollars a day. No national government or corporation on Earth could support that.

Even if technology development by industry leaders such as SpaceX lowers launch costs by 1,000x, which I find to be an absurd proposition, that's still $1 million/day with no return on investment.
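Spelled out, the back-of-the-envelope arithmetic above looks like this (all figures are the post's own rough estimates, not authoritative costs):

```python
# Back-of-the-envelope colony cost, using the post's own rough figures.
iss_cost_per_person_per_day = 1_000_000  # ~$1M/day per astronaut on the ISS
mars_multiplier = 10                     # assumed ~10x for a Mars mission
colony_size = 100                        # hypothetical 100-person colony

colony_cost_per_day = iss_cost_per_person_per_day * mars_multiplier * colony_size
print(f"${colony_cost_per_day:,}/day")   # $1,000,000,000/day

# Even an (optimistic) 1,000x reduction in launch costs leaves:
reduced_cost_per_day = colony_cost_per_day // 1_000
print(f"${reduced_cost_per_day:,}/day")  # $1,000,000/day
```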

Even though SpaceX has improved the economics of launching to LEO and other near-Earth orbits, our space capabilities seem to be degrading in most other areas. The promised Artemis moon missions are continually delayed by frankly embarrassing engineering oversights, and companies like Boeing, Lockheed Martin, and Northrop Grumman that were essential in the first space race can't seem to produce components without running over cost and under quality.

4. Narrative

This one is a little more speculative. The West, and much of the rest of the world, is entering a demographic spiral, with birth rates falling ever further below replacement. This relieves a lot of the "population pressure" to colonize space, but also indicates a collapse in the narrative of progress that underpins the whole rationale that would lead us to even want to do such an absurd thing. If our leadership and population don't want to build the physical infrastructure and human capital necessary to embark on this kind of megaproject, doesn't that suggest the dream is no longer appealing to the collective psyche? My read on the ground is that the general population is sick of the narrative of progress: we were promised flying cars and backyard nuclear power plants, but instead got new financial instruments, addictive technology, and insurance.

China, of course, is held up as a positive example where the dream of the "engineering state" is kept alive, but I think this is misleading. China has a potentially even worse demographic crisis than we do, and most of its smartest people (at least those I see in American academia) are desperate to leave.

Without a compelling narrative, the challenges facing potential space colonization become even more stark and difficult to overcome.

If AGI is real, and it’s really less than a decade away or whatever, won’t ‘it’ just magically make all these issues null with its magnificence?

I have trouble understanding what I’m supposed to believe when it comes to AGI.

More directly to your post - we go to space because we can. A society that stops exploring, stops progressing (imo). We need to keep doing new and interesting space stuff, maybe colonization isn’t there yet, but we should be heading to the moon with the idea to plant a tree there.

There's a concept in card games - I've forgotten the name - where you play as if a card is in a specific location because if it isn't, you're doomed anyway.

It is possible that AGI could be built within a decade. However, if anyone builds it, everyone dies. If we're all dead, we don't really care whether our further plans are accurate or not. So, plans for the further future should assume that AGI did not, in fact, come within a decade. (Also, we should stop it, but we do need at least some plans for what to do afterward.)

Why does one assume AGI means everyone dies? I’m genuinely curious. Even if we assume that AI becomes a God Emperor, that doesn’t necessarily make it omnicidal.

Have you read The Sequences? The AI does not love you, nor does it hate you, but you are made of atoms that it can use for something else.

It's not an assumption, but a conclusion of three propositions:

1. Artificial General Intelligence will lead to Artificial SuperIntelligence carrying out its own goals.

There's no direct evidence for this (for obvious reasons), so maybe it's wrong, but it's really hard to come up with examples of technologies where we did manage to match nature but didn't manage to best nature soon afterward. We can fly 3 times higher and 18 times faster than any bird (or 10,000 times higher and 100 times faster, if you count spacecraft). We can lift 600 times more weight than an elephant, and dive 3 or 4 times deeper than a whale. We have alloys ten times stronger than bone, and weapons a hundred thousand times more lethal than any jaws. It's unlikely that the best medium to host intelligence is wet meat, and when we have better intelligence we're likely to get faster technological improvement, and if faster technological improvement leads to even better intelligence then the scope of that positive feedback loop is incalculable. Once the loop goes far enough, humans are no longer in it, and objecting to the subsequent directions it takes might be about as effective as chimps throwing feces at an incoming nuclear warhead. Either we get AI goals right from the start, or we don't.

2. Most goals that don't explicitly include "don't be omnicidal" end up implicitly entailing "be omnicidal", and even goals that do include "don't be omnicidal" can get closer to that than we'd be comfortable with.

I don't care much about ants, so I happily live in a home and drive on roads and go to buildings where we paved over all the ants that used to live there. I didn't hate those ants, it's just that they were using atoms which I wanted to use for something else, as the old saying goes. I do have goals that include "don't be omnicidal", even of ants, so if we got close to actually driving many ant species (or species that prey on them) extinct then I'd want to hit the brakes, but in the meantime I'll poison any ant hill that gets in the way of, say, having a slightly nice lawn.

3. It's nearly impossibly hard to accurately formalize our goals, and in the end all software is a formal set of instructions.

The worst software is software that was almost correct. Folks tried to write software and firmware for a particular new hard drive interface, but there was an incompatibility and it got the "edit the drive contents" part correct but not the "to what the user wants" part, and a friend lost his files. Folks try to write software to do things locally for its users based on what it reads in incoming internet packets, and sometimes they get the "read in incoming internet packets" and "do things locally" bits correct but not the "for its users" bits, and then a thousand computers are pwned by a Russian botnet. In those sorts of cases we just delete everything and restore from backup, but if software intended to edit the universe goes badly, we don't want to delete and we don't have any full backups.

This is the proposition that's gotten the weakest recently, now that we've basically given up on formalizing AI goals and are training them instead. I'd say it makes conclusions of Doom much less certain, and I'd love to say that it's made them weak enough to refute them ... but how well is the training going? AI still (albeit more and more rarely) even makes blatant mistakes of fact, including in cases where checking self-consistency and checking against external research could have corrected it. Mistakes of morality are much trickier. The is-ought problem means you've got to get ethics mostly right before self-consistency can help you correct any remaining mistakes. "External research" in questions of morality gets us to countless mutually-incompatible religions and ideologies, generally with many mutually-incompatible interpretations. AI alignment is unmoored from objective reality in a way that AI capabilities aren't, so it's still quite possible that the latter will greatly outpace the former.

Agreed with all of that except the neural-nets part. The problem with neural nets is that you literally don't know what the AI's goals are; training gives you something that does the things you train for during training, but it is agnostic as to why. You can easily, particularly at high intelligence, get something that does the things you want for instrumental reasons like "I don't want to be turned off/re-educated" (note that this is an instrumentally-convergent goal, and will thus pertain for most terminal goals) - and that will kill you the moment it gets a chance (note that, given it's smarter than you, you can't train against that, because fake chances to kill you will be detected and a real chance to kill you doesn't let you train afterward).

Furthermore, even if you do get some vague interpretability, it's not going to be reliable on something smarter than you (you cannot comprehend it as a whole; that's the whole point), and as you just noted, true positives are very, very rare and hence will still be massively outnumbered by false positives.

Neural nets are mad science. GOFAI and uploads are a much-better plan - still immensely dangerous, but they're not just summoning demons and hoping.

EDIT: In case there's the "well, we're neural nets, and we learn morality okay" objection floating around in somebody's head: the problem with that is that humans are hardwired to be able to learn morality, not just learn to fake morality. Psychopaths are those people for whom this hardwiring fails (they can learn what ethics are just fine; they just don't care about them). This moral hardwiring was bred into us by evolution due to the millennia of tribe-on-tribe violence that made working together a winning strategy (given that humans are not really that different from each other in physical capabilities). We don't know how to duplicate that. So teaching neural nets morality will, at sufficient degrees of intelligence, just teach them to fake it. I listed uploads as being less insane than de novo neural nets because you'd be uploading the moral hardwiring as well without needing to comprehend it - it's still dangerous because the human brain is not designed for existence as software and various known and unknown mental illnesses may occur, but at least there's something to work with.