
The article seems to undermine itself. The only way the author can comment on the utility of believing untrue things is by providing examples of untruths the author knows are untrue and are only believed to be true by others. In other words, in every provided example, the author is apparently an exception to his own rule because he is capable of apprehending these benefits without actually believing untrue things.

Which leads me to think that this enlightened state of knowing what we need to do to create a better world is possible without believing untrue things in the first place. And if we can get there, it seems like we should.

And if your retort is that some/many people are not capable of that enlightenment, then just be honest about what you're really doing here: you're promoting the noble lie for the rubes.

If you all happen to believe there’s an omnipotent power that rewards good behavior after you die, and you’re in a close-knit community where betrayals have huge social costs, then you can all trust each other.

I suspect the second factor is doing a lot more work than the first. (Suppose the diamond marketplace were 50/50 Jews and Jains; would we expect each trader to do exactly as much business with their own group as with the other?)

Maybe right and wrong don’t “really” exist. But it’s tough out there and you don’t want people to hurt you. So you loudly advertise that if anyone defects against you, you’ll go out of your way to punish them—even “irrationally” hurting yourself if that’s necessary to get revenge. To make this threat maximally credible, you adopt as core beliefs that right and wrong do exist and that defectors are wrong.
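The retaliation strategy described above can be sketched as a toy iterated Prisoner's Dilemma. This is a hypothetical illustration, not anything from the article: the payoff matrix, round count, and strategy names are my own assumptions. The point is that a strategy yields a measurable payoff rather than a truth value—we can ask whether credible punishment pays, not whether it is "false":

```python
# Toy iterated Prisoner's Dilemma (illustrative assumptions throughout).
PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
    return score_a, score_b

def grim_punisher(opponent_history):
    # Cooperate until defected against once, then punish forever --
    # even though punishing also costs the punisher points each round.
    return "D" if "D" in opponent_history else "C"

def always_defect(opponent_history):
    return "D"

print(play(grim_punisher, grim_punisher))  # (300, 300): mutual cooperation
print(play(grim_punisher, always_defect))  # (99, 104): retaliation caps the gain
```

Two punishers sustain cooperation indefinitely, while a defector facing a punisher gains only a one-round advantage before the payoffs collapse—effective or ineffective, but never "true" or "false".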

I'm not sure I'd count adopting a certain game-theoretic strategy as "believing an untrue thing" - a strategy can be effective or ineffective, but it can't be "false". The article seems to imply that this is an example of a false meta-ethical belief, but this is a controversial assertion and we can't take for granted that everyone understands "right" and "wrong" in the same way. (Maybe I've just defined "right" as whatever strategies are effective?) Which leads to the next point:

At least in recent history, people on both sides of wars seem to believe they are fighting for the side of good. Obviously, that can’t be right, and in a sense, two such parties fighting should be cause for them to sit down and work through Aumann’s agreement dynamics.

A disagreement over values isn't like a disagreement over facts - we wouldn't expect it to be resolvable through Aumann's agreement theorem even if both sides were being perfectly rational.

I agreed with the gist of the article, but I can't help but wonder if this topic was 99% covered by LW at some point.

The Dark Arts post (and others like it) covers this topic pretty well.

LW is so big that the odds are close to 1 that it's been covered there.

I think we live in a time when even the most original ideas out there have probably already been written down by someone on the Internet. Especially if they make sense or are important.

I think this cluster of posts deals with it, though not from an endorsing perspective. There may still be more I haven't thought of.