No. Humans are fallible and inconsistent irrespective of this logic, but that's a separate issue. By this logic, precommitment to utilitarian moral calculus plus consequentialist decision theory is definitionally incompatible with a credible promise of context-agnostic rule observance, regardless of generic fallibility in practice; a perfectly rational and consistent Singerian reasoner would not be perfectly trustworthy, whereas a perfect Kantian would be. In fact, a perfect Singerian would be much less trustworthy than the normal person, because he'd see more loopholes and opportunities to rob Peter to help n>1 Pauls without second-order complications, and would consistently pounce on them. In practice, all humans, even Singer I suppose, are partially deontological too, and have moral compunctions that rely on things like habit and naive empathy, not to mention the fear of getting caught.

I believe that the difference between the doctrines in theory indicates the direction in which one would practically skew.

> And in a comparative sense, a human-who-claims-to-be-a-consequentialist can precommit to anything just as much as a human-who-claims-to-be-a-deontologist

...Yes, people can deceive.

> large EA orgs, and large organizations of claimed utilitarians, seem to organize better than comparison orgs of normies in charity or philosophy

I am not at all sure that EAs organize better, in some measurable sense, than old-school charities. They're very likely better at min-maxing specific interventions, and they've had a growth spurt lately. But, e.g., The Giving Pledge seems like a much bigger deal. And in any case, Eigenrobot and Hanson argue that Charity is not about Helping, so it may be that old charities have been organizing very well — for their own purposes.