Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I posted this in response to a previous AI thread but deleted it; I think it has actually aged well given Elon's signature on the letter yesterday and Yud's op-ed:

I am not a Musk fanboy, but I'll say this: Elon Musk very transparently cares about the survival of humanity as humanity, and that care runs all the way down to a biological drive to reproduce his own genes. Musk openly worries about things like dropping birth rates, while also personally spotlighting his own rabbit-like reproductive efforts. Musk is clearly a guy who wants and expects his own genes to spread, last, and thrive in future generations. This is a rising-tide approach for humans. Musk has also signaled clearly against unnatural life extension.

“I certainly would like to maintain health for a longer period of time,” Musk told Insider. “But I am not afraid of dying. I think it would come as a relief.”

and

"Increasing quality of life for the aged is important, but increased lifespan, especially if cognitive impairment is not addressed, is not good for civilization."

Now, there is plenty that I, as a conservative, Christian, and Luddite, would readily fault in Musk (e.g. his affairs and divorces). But from this perspective Musk certainly has large overlap with a traditionally "ordered" view of civilization and human flourishing.

Altman, on the other hand, has no children, and as a gay man never will have children inside a traditional framework (yes, I am aware many (all?) of Musk's own children were IVF. I am no Musk fanboy).

I certainly hope this is just my bias showing, but I have greater fear for Altman types running the show than for Musks, because they are a few extra steps removed from a stake in future civilization. We know that Musk wants to preserve humanity for his children and his grandchildren. Can we be sure that's any more than an abstract good for Altman?

I'd rather put my faith in Musk's own "selfish" genes, at the cost of knowing most of my descendants will eventually be his too, than in a bachelor, not driven by fecund sexual biology, doing cool tech.

Every child Musk pops out intermingles his genetic future more tightly with the rest of humanity's.


In Yud's op-ed, which I frankly think contains a lot of hysteria mixed in with a few decent points, he says this:

On March 16, my partner sent me this email. (She later gave me permission to excerpt it here.)

“Nina lost a tooth! In the usual way that children do, not out of carelessness! Seeing GPT4 blow away those standardized tests on the same day that Nina hit a childhood milestone brought an emotional surge that swept me off my feet for a minute. It’s all going too fast. I worry that sharing this will heighten your own grief, but I’d rather be known to you than for each of us to suffer alone.”

When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.

I'm unclear whether this is Yud's bio-kid or a step-kid, but the point resonates with my perspective on Elon Musk. A few days ago SA indicated a similar thing about a hypothetical kid(?):

I once thought about naming my daughter Saffron in its honor. Saffron Siskind the San Franciscan, they would call her. “What a lovely girl in a normal organic body who is destined to live to an age greater than six”, the people would say.

In either case, I don't know about AI x-risk. I am much more worried about 2cimerafa's economic collapse risk. But in both scenarios I am increasingly of a perspective that I'll cheekily describe as "You shouldn't get to have a decision on AI development unless you have young children". You don't have enough stake.

I have growing distrust of those of you without bio-children who are eager or indifferent about building a successor race or exalting yourselves through immortal transhumanist fancies.

In either case, I don't know about AI x-risk. I am much more worried about 2cimerafa's economic collapse risk. But in both scenarios I am increasingly of a perspective that I'll cheekily describe as "You shouldn't get to have a decision on AI development unless you have young children". You don't have enough stake.

I'll call your 'don't get a say on AI development unless you have young children' and raise you 'you don't get to have a say on abortion unless you have a uterus' or 'you don't get a say in gun control unless you own an AR-15' or 'you don't get a say in our adventures overseas unless you serve(d) in the military.'

What's the general principle you want to employ here, and if you want to restrict it to certain use-cases, what's your rationale? In theory we should all have a say in all aspects of how our society is run. Maybe in practice we don't want the specifics of highly technical questions like the storage of nuclear waste to be decided by referenda, but self-determination and broad involvement of the populace in moral questions seems to be a fundamental value of the western political tradition.

I think two of your analogies would be better formulated here to be more, well, analogous to the OP:

-"If you live in a gated community with quick police response, you shouldn't push so hard for gun control."

-"If you had a son who was eligible to be drafted into the military, would you support military intervention as eagerly as you do?"

You're taking what was explicitly called out as a cheeky framing of what is more of a heuristic for why I trust Musk more than Altman, and people with kids more than single people, when talking about the future of civilization, and asking me to generalize it into a principle. But sure, let's play with it.

All three of your examples are Mad-Libs fallacies: they are written the same way, but actually point at the opposite of my argument (if taken as a 'principle').

  • 'you don't get to have a say on abortion unless you have a uterus'

  • 'you don't get a say in gun control unless you own an AR-15'

  • 'you don't get a say in our adventures overseas unless you serve(d) in the military.'

The more accurate analogy that fits with your examples is something like "You don't get a say on AI unless you are working on AI," or own an LLM, or something.

But again, that is very far away from what I said. None of those examples are formulated to capture what I was talking about. They all angle at direct experience in the subject, with the partial exception of the abortion one, but that will quickly develop into an abortion debate.

Your examples are of agency in the policy based on exposure to the tools, while mine was agency based on the effects of the outcomes. Again, the abortion one only follows if you argue that the baby isn't a party with exposure.

but in theory, self-determination and broad involvement of the populace in moral questions seems to be a fundamental value of the western political tradition.

So this is the part that I disagree with, and that my first round on the Motte helped disabuse me of. AI risk is a good example of where this kind of libertarian ethic breaks down.

My "general principle" looks something like this, but it's really a heuristic not a principle

  1. If you are farming the commons, appeals to axiomatic autonomy and unlimited self-determination are weak.

  2. EVERYTHING you do is farming the commons, though unequally weighted.

  3. The more something farms the commons, the more it should be determined by those whose commons are affected rather than by the farmer's desires.

  4. Something about how, if you extend this to longtermism, you've gone too far.

You're taking what was explicitly called out as a cheeky framing of what is more of a heuristic for why I trust Musk more than Altman, and people with kids more than single people, when talking about the future of civilization, and asking me to generalize it into a principle.

And you wrote a meandering post that went through Yudkowsky, Musk, and Altman to conclude that you are more concerned about economics than x-risk, and that you and yours with children should have a say with regard to the future while those of us with 'transhumanist fancies' instead of children should not. Can you blame me for focusing on the only sentence in your post that was bolded when trying to distill a thesis?

asking me to generalize it into a principle.

I mean, I'm assuming there's some kind of framework behind your beliefs. You don't need a generalizable principle, but there needs to be more substance to your argument than "I have children and you don't therefore I decide" if you want to change my mind.

The more accurate analogy that fits with your examples is something like "You don't get a say on AI unless you are working on AI," or own an LLM, or something.

Fine, I'll lay some cards on the table instead of being a pain in the ass.

Reasonable arguments seem to be that people should have a say in the decision-making process if (1) they will be affected by the outcomes or (2) they have significant expertise in the area such that we think they'll make better choices than the average Joe. I can imagine arguing for a flat system ('one person, one vote') or a technocracy (decisions made by committees of experts), with our society falling somewhere in between.

Example (i), abortion: Women will obviously be affected by the outcome of the abortion discussion to a greater extent than men. Certain people would argue that they also know more about it than men (I can vouch that my female friends with children are certainly more intimate with the details of pregnancy, childbirth and nursing than their husbands) but that's a rabbit hole I'd probably rather avoid so you can strike it from the record if you like.

Example (ii), guns: AR-15 owners are obviously affected by the outcome of gun control regulation (confiscation of their arms) and arguably more knowledgeable about at least the mechanics of shooting and gun ownership.

Example (iii), military: Active military obviously have more of a stake in foreign policy decisions given that they'll be the boots on the ground, and seem highly likely to know more about the military and foreign engagements than your average civilian.

So no, I disagree with this statement:

None of those examples are formulated to capture what I was talking about. They all angle at direct experience in the subject, with the partial exception of the abortion one, but that will quickly develop into an abortion debate.

Each of those examples has a stakeholder that will be deeply personally affected by the policies in addition to having more (as you put it) direct experience with the subject than the average person.

AI risk is a good example of where this kind of libertarian ethic breaks down.

Perhaps I misspoke by saying 'self-determination.' A say in the direction of the community and nation-state in which they live may be more accurate.

The more something farms the commons, the more it should be determined by those whose commons are affected rather than by the farmer's desires.

Can you explain what you mean by farms the commons, and concretely what that refers to in this case? It carries connotations of private enterprise benefiting from subsidies or avoiding dealing with the externalities of their actions, but I assume that's not what you're going for here.

I think this splits too quickly into a discussion about principled views that I would be happy to have under separate cover. I'd rather revert to my only real point: that, as a parent, the concerns of other parents about their children are a force of commonality and a potential for alignment. I recognize that in Elon Musk to an extent, and I was surprised to see Eliezer express the sentiment that at least the child of a loved one is top of mind for him.

I am, of course, aware of the ways appeals to children can be an emotional camel's nose into the tent of control. But my perspective is to ask why it works, and whether that reason is not always wrong.

People with kids, and people with traditional families (neither Elon nor Eliezer has the latter), are going to weigh future planning differently than those without. Am I, someone invested in the survival of the traditional human family, wrong to prefer that the leaders and those with power over transformative technology share my experiences and values?

Generally speaking, I want leaders, decision makers, and people with power to share my values (as does everyone, everywhere, all the time; just because the liberal's values are liberalism doesn't mean that their desire for leaders to prioritize liberalism isn't the exact same impulse). AI is not an exception to that, and might rather be the most extreme case in my lifetime.