There are many ways in which what humans do differs from what they should do if they wanted to achieve the ends they claim to want to achieve. Some of these are obviously because people don’t really want what they say they want. Few people who claim human life is valuable beyond measure are unaware that small amounts of money can save lives overseas, for instance.
On the other hand, many cases are obviously innocent failures of imagination or knowledge. The apparent progress humanity has made over recent millennia is not just a winding path through various signaling equilibria; we have actually thought of better stuff to do. The stone age didn’t end because making everything out of stone stopped being a credible sign of a hardworking personality.
In between there are many interesting puzzles where it isn’t clear whether hidden motives or innocent failure are to blame*. Many people strongly prefer innocent failure as a default, but in general if you can think of some improvement to the status quo, it should be pretty surprising if heaps of other people haven’t also thought of it. Even if your idea is ultimately bad, there should be some signs of people having looked into it if its deficiency isn’t obvious. Often it is clear that people have known of apparently good ideas for ages, with no sign of action. So I think there is quite a case for hidden motives explaining many of these puzzles.
Sometimes when I point out such instances, I say something like ‘ha, you aren’t trying to do what you claim – looks like you are secretly trying to do this other thing instead’. Sometimes I say something like ‘if you are trying to do X, maybe you should try doing it in this way that would achieve X, rather than that other way that doesn’t seem to work so well’.
I’d like to make clear that my choice of explicitly blaming hidden motives vs. suggesting alternatives as though innocent failure were the cause is not necessarily based on how likely these two explanations are. I think either presentation of such a puzzle should suggest both hypotheses to some extent. If I blame hidden motives and you feel you don’t have those hidden motives, you should question whether you are behaving efficiently. If I blame innocent failure, and you don’t feel compelled to fix the failure, you might question your motives.
I expect the truth is usually a confusing mixture of hidden motives and innocent failure. In many such intrapersonal conflicts, it seems at least clear which side outsiders should be on. For instance if two parts of a person’s mind are interested in helping other people and looking like a nice person respectively, then inasmuch as those goals diverge outsiders should side more with the part who wants to help others, because at least others get something out of that.
Outsiders are also often in a good position to do this, due to their controlling influence on the part who wants to look like a nice person. They are the people to whom you must look nice. This means they can often side with the more altruistic part (or, even if there isn’t one, act in their own interests) just by insisting on higher standards of credible altruistic behaviour before they will be impressed. This is one good reason for pointing out what people should do better if they really cared, even if it seems unlikely that they do. Even if not a single reader really cares, one can at least hope to give them a measure by which to be more judgemental of others’ hypocrisy.
*The other very plausible explanation for a discrepancy between what seems sensible and what people do is always that people are in fact behaving sensibly, and the perplexed observer is just missing something. While this is presumably common, I will ignore it here.
I have to say that I disagree with “few people who claim human life is valuable beyond measure are unaware that small amounts of money can save lives overseas.” I can tell that I disagree with it because I was about to tweet it and realized that it was the opposite of what I thought it was (I didn’t see the “un”). When I point out to people that you can save a life probabilistically for ~$1000, most people are surprised and didn’t realize that there was actually a way to quantify it. I’m not sure whether they’d think the cost would be higher or lower a priori, though. It’d be interesting to see some data on this.
By the way this is actually a case of you using the word “obviously” (kind of like “of course”) where your argument is less strong.
“This is one good reason for pointing out what people should do better if they really cared, even if it seems unlikely that they do. Even if not a single reader really cares, one can at least hope to give them a measure by which to be more judgemental of others’ hypocrisy.”
I agree that this strategy is genuinely useful, but I think there’s a downside worth taking account of.
I’ve been playing around with the idea that people are actually self-interested, by which I mean caring ultimately only about things which were correlated with inclusive genetic fitness in the EEA, and that our wider forms of altruism and cooperation come out of strategies to cooperatively pursue these things (and incidentally will continue to be pursued even if we are explicit about our goals). It’s been making a lot of sense, more sense than any alternative explanations I’ve tried before, and also presenting a lot of useful levers for my attempts at rationality. But we’ll see how it holds up over time.
If we increase judgment of self-interest in order to incentivize people to gain social standing by actually doing good, we also increase incentives not to view oneself as self-interested. If it’s true that we actually are self-interested, and we can’t change this without brain-modification (but on reflection wouldn’t want to, at least as long as we can cooperate as-is), then we’re making it harder to accept ourselves as we are. And I think that accepting someone as self-interested is quite possible, as a self-interested person can have plenty of motivation to be genuinely honest and to refrain from defection against others, and can and will have true and genuine friendships, etc. Just as a rational person should do better than an irrational person (the rational person not being condemned to eternally two-box), a self-interested person should achieve more for themselves than someone who isn’t (the self-interested person not being condemned to be deceitful, untrustworthy, and unloved).