Culture War Roundup for the week of July 7, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Adoption is an even worse offender if you take this line of thinking. At least with surrogacy the client is usually genetically related to the children.

Adoption is an even worse offender if you take this line of thinking.

No significant argument here. The kind of adoption process that involves traveling the world to find the perfect orphan is straight-up child buying, and has some moral similarities to eugenic embryo modification.

The kind of adoption process that involves traveling the world to find the perfect orphan is straight-up child buying, and has some moral similarities to eugenic embryo modification.

You say that like it's a bad thing. I consider both of them to be fine (not that I'm looking to adopt), or at the very least "not my business".

I am somewhat unfairly advantaged by having been a lurker long before I started posting, but you and I have significantly different points of view on what constitutes “a bad thing.”

That said, I once believed the trajectory of human civilization was in the direction of being golden gods, so I have hopes that you’ll come around. [[Insert positive emoji of your choice]]


Completely unrelated, but I have often wanted to pick your brain about your (trying to be fair here) significant concern around death. Correct me on anything I get wrong about your ideas, but you seem to be very focused on instantiating uploads and achieving eternal cyber life of the mind at some point in your expected lifespan.

It seems trivial to me that a society that can actually achieve that goal, is on the cusp of being able to simulate every mind that ever lived, and any arbitrary number of minds that didn’t. So if you think it’s inevitable that simulation will happen, what’s your concern about dying? The pain will suck, I’m sure and that’s fair, I’d enjoy a golden god body too, but it seems highly likely given your priors that you’ll just go to sleep and then wake up a simulation at some unspecified point in the future, with no sensation of loss.

What do you think? Do you want to spin this conversation off somewhere else?

Jokes aside, given that you were, presumably, at some point a transhumanist, I would expect that you have better arguments against it than the average person. I would be happy to discuss them, if you cared to.

It seems trivial to me that a society that can actually achieve that goal, is on the cusp of being able to simulate every mind that ever lived, and any arbitrary number of minds that didn’t. So if you think it’s inevitable that simulation will happen, what’s your concern about dying? The pain will suck, I’m sure and that’s fair, I’d enjoy a golden god body too, but it seems highly likely given your priors that you’ll just go to sleep and then wake up a simulation at some unspecified point in the future, with no sensation of loss.

A post-Singularity civilization with Dyson spheres and Matrioshka brains has a lot of energy and computational power, but it is not infinite. The sheer number of computations needed to accurately simulate a single human brain is a subject of considerable debate. In their landmark roadmap on Whole Brain Emulation, Anders Sandberg and Nick Bostrom surveyed estimates that ranged from 10^18 to 10^21 floating point operations per second (FLOP/s). Let's be charitable and take a figure on the lower end, say 10^18 FLOP/s, just for a real-time simulation.

Now, let's consider the scale. The Population Reference Bureau estimates that about 117 billion modern humans have ever been born. Of those, about 109 billion have died. If we wanted to bring them all back for, say, a simulated century, the total compute required would be 109 billion people * 100 years * 3.154e+7 seconds/year * 10^18 FLOP/s. This works out to roughly 3 × 10^38 total floating point operations. That's a big number, but within budget for a Kardashev Type II civilization. But this calculation completely ignores the monumental task of getting the data in the first place. This is, as I see it, the fatal flaw:
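If you want to sanity-check the arithmetic, here is the same Fermi estimate as a few lines of Python. The inputs are just the assumed figures from the paragraph above, not hard data:

```python
# Back-of-the-envelope compute budget for simulating every deceased human for a century.
# All inputs are the assumed figures from the paragraph above, not measured values.
people = 109e9               # ~109 billion humans who have already died
years_each = 100             # one simulated century per person
seconds_per_year = 3.154e7
flops_per_brain = 1e18       # charitable low-end estimate for one real-time brain emulation

total_flop = people * years_each * seconds_per_year * flops_per_brain
print(f"Total compute: {total_flop:.1e} FLOP")  # ~3.4e38 FLOP
```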

For the vast majority of those 109 billion deceased humans, the information that constituted "them" is gone. Utterly. This is what cryonicists call "information-theoretic death" (https://en.wikipedia.org/wiki/Information-theoretic_death). It is the point at which the physical structures in the brain that encode memory and personality are so thoroughly disrupted that no technology, however speculative, could even in principle recover them. Think cremation, advanced decomposition, or having your constituent atoms scattered across the globe by the nitrogen cycle.

Let me give an example:

Unga and Bunga were, someone claims with authority, Siberian cavemen from 27,000 years ago. They were, unfortunately, not lucky enough to be frozen in permafrost. They have decayed, and all we're left with is maybe a fragmentary sample of DNA or the odd bone. We know next to nothing about their lives, their language, or their culture.

Even with an unlimited budget, we could not bring them back. I don't think even the most powerful ASI around could.

The challenge is not just that ancient DNA is typically fragmented and damaged, but that DNA is merely the blueprint for the hardware. The "software," the connectome, the specific synaptic weights, the epigenetic modifications, the entire lifetime of learned experiences that made Unga Unga and not just some generic instance of his genome, has been overwritten by entropy. The information is lost, and the second law of thermodynamics is a harsh mistress. Reconstructing them from scratch would be, if not a physical impossibility, close enough for government work.

Even for a modern person with a vast digital footprint, the problem remains. You could fine-tune an LLM on my blog, my comments, and every email I've ever sent. You could supplement it with video, audio, and detailed recollections from my family. The result would be a very convincing "self_made_human" chatbot, but it would not be me. It would be a sophisticated imitation, a high-fidelity echo that lacks my internal continuity of consciousness.

I demand far higher fidelity. I want to test such an entity both as a black box and, ideally, by looking at the simulated neurons and the information they contain.

The same problem applies to cloning. You could take my DNA, create a clone, and raise him in an environment designed to replicate my own. But how would you transfer my almost 30 years of memories, my specific neural pathways? He would be, at best, my identical twin brother, separated by a generation, not a continuation of my existence. He and I would agree that he is not me. Not in the same manner as the me of yesterday or from next year. There just isn't enough data.

(If he doesn't, then something definitely broke along the way)

This brings us to cryonics. It's not a guarantee, not by a long shot. We do not know for certain what level of structural and chemical detail must be preserved to retain the information essential for personal identity. But it is, at present, the only method that even attempts to bridge the gap between clinical death and information-theoretic death by preserving the brain's physical structure.

As others have argued, the decision to sign up can be framed as a question of expected value. If you assign even a tiny, non-zero probability to the success of future revival technology, the potential payoff is a lifespan of indeterminate, potentially astronomical, length. It's a bet against the finality of decay, a wager that our descendants will be much, much smarter than we are. I find that a slim hope is infinitely better than no hope at all, but I would still very much prefer to not die in the first place.
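To make that expected-value framing concrete, here's a toy calculation. Every number below is made up purely to illustrate the shape of the argument; none of them are claims about real probabilities or costs:

```python
# Pascalian expected-value sketch for signing up for cryonics.
# All numbers are hypothetical placeholders, chosen only to show how the bet is structured.
p_revival = 1e-3             # assumed (tiny) probability that revival ever works
payoff_life_years = 10_000   # assumed extra life-years if it does
cost_life_years = 0.5        # assumed total cost, expressed in life-years of money/effort

expected_value = p_revival * payoff_life_years - cost_life_years
print(expected_value)  # positive even under a very pessimistic p_revival
```

The point isn't the specific numbers; it's that a small probability multiplied by an enormous payoff can still dominate a modest, bounded cost.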

What do you think? Do you want to spin this conversation off somewhere else?

Well, that's what I think haha.

I now mildly regret that whopper of a post being so far down thread. I usually prefer to write out in public instead of DMs (I'm vain enough to want the public to praise me for the work I do). You're welcome to reply here, or post a top-level comment elsewhere, or whatever you please. Even DMs, though I would be saddened by going through all this effort behind closed doors.

If the chatbot doesn't quite have your 30 years of memories, but can make an impression that would fool anyone else, what's the difference?

It's just that I feel like your arguments prove too much, as the expression goes. If there can be such a thing as "not enough data", then indeed how can you place a cutoff point? There's never all data. You of today don't have all the data on the you of yesterday.

If the chatbot doesn't quite have your 30 years of memories, but can make an impression that would fool anyone else, what's the difference?

It makes a difference to me. I'm the customer here! If I'm physically around to evaluate the claim, then presumably there's some kind of non-destructive mind upload going on. I wouldn't consent to a destructive one unless I had no choice, or if I was sufficiently convinced by evidence that it highly accurately captures almost all behavior and internal state.

If I died without hope of recovery, then I have no control over what others get up to. If they want to run a fine-tune of GPT o5 that mimics me via text, in an unending simulation of The Motte, and name that thing self_made_human, what can I do about it? Even I think that's a strict improvement over being dead and entirely forgotten.

It's just that I feel like your arguments prove too much, as the expression goes. If there can be such a thing as "not enough data", then indeed how can you place a cutoff point? There's never all data. You of today don't have all the data on the you of yesterday.

As far as I am aware, there is no principled and rigorous way to answer that question, at present.* It rarely comes up in normal life, because humans can't trivially clone themselves with their memories.

Some people think they live on through their children. Some think it's the books they write, or the good they do. That's good for them, or at least good enough. It's highly inadequate for me.

And the prompting question was why I seek immortality, proper immortality and not word-play. That's my answer.

*I have strong intuitions on the matter, and want to see whether science and engineering can make them rigorous. I strongly believe that there's a meaningful and objective way to compare similarity between minds, in the same way you can generate embeddings for text.
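As a toy illustration of the kind of comparison I have in mind (the "mind embeddings" here are invented vectors, not a real technique):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical "mind embeddings": each vector stands in for a summary of a mind-state.
# The numbers are random placeholders; only the comparison itself is the point.
rng = np.random.default_rng(42)
me_today = rng.normal(size=1024)
me_tomorrow = me_today + rng.normal(scale=0.01, size=1024)          # tiny overnight drift
me_twenty_years_ago = me_today + rng.normal(scale=0.8, size=1024)   # much larger divergence

print(cosine_similarity(me_today, me_tomorrow))           # ~0.999, effectively the same person
print(cosine_similarity(me_today, me_twenty_years_ago))   # noticeably lower, still recognisably "me"
```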

If I go to bed right now, and wake up again, the new self_made_human and I will be virtually indistinguishable, even to ourselves. So we have no qualms about calling ourselves the same person. If you want proof, ask me this question again tomorrow, I guess.

The same holds for SMH from last week, less so for SMH from last year, and even less so for SMH from 20 years ago. It will also continue to become less true with time. But I consider such divergence both unavoidable at present, and also entirely acceptable. I want to be able to grow and improve, ideally in a self-directed fashion.

There are operations I could undergo that wouldn't preserve identity. Say developing Alzheimer's, or having a lobe of my brain removed.

Even a clone of me with no shared memories would be a very similar person. We'd get along well, I'd treat him like family. I'd probably give him money if he asked. But he wouldn't meet my threshold for being the same person.