Suppose you are in the business of making charity recommendations to others. You have found two good charities which you might recommend: 1) Help Ugly Children (HUC), and 2) Help Cute Children (HCC). It turns out ugly children are twice as easy to help, so 1) is the more effective place to send your money.
You are about to recommend HUC when it occurs to you that if you ask other people to help ugly children, some large fraction will probably ignore your advice, conclude that this effectiveness road leads to madness, and continue to support 3) Entertain Affluent Adults (EAA), which you believe is much less effective than HUC or HCC. On the other hand, if you recommend Help Cute Children, you think everyone will take it up with passion, and much more good will be done directly as a result.
What do you recommend?
Here are some of my own thoughts.
First, it depends on what you are claiming to do.
If you claim to be recommending ‘something good’, or ‘something better than EAA’, or anything else that is actually consistent with recommending HCC, then probably you should recommend HCC. (This ignores some potential benefit from increasing the salience of effective giving by recommending especially effective things.)
If you claim to be recommending the most effective charity you can find, then recommending HCC is dishonest. I claim one shouldn’t be dishonest, though people do have different views on this. Setting aside any complicated moral, game-theoretic, and decision-theoretic issues, dishonesty about recommendations seems likely to undermine trust in the recommender in the medium run, and so ultimately to leave the recommender with less impact.
You could honestly recommend HCC if you explicitly said that you are recommending the thing that is most effective to recommend (rather than most effective to do). However, this puts you at odds with your listeners. If your listeners want to be effective, and have a choice between listening to you and listening to someone who is actually telling them how to be effective, they should listen to that other person.
Perhaps there should just be two different recommendations for different groups of people: an ‘effective’-labeled recommendation of HUC for effectiveness-minded people who will act on it, and a something-else-labeled recommendation of HCC for everyone else. (Some readers last time suggested something like this.)
I think this makes things better (modulo costs and complication), but it doesn’t resolve the conflict: now you have two categories of people, and within each category there is still a most effective thing to suggest and a most effective thing for them to do, and the two can still come apart.
The main conflict would disappear if the most effective thing for you to recommend, on the values you are using, were also the most effective thing for your listeners to do, on their values and in their epistemological situation.
I think a reasonable approximation of this might be to choose the set of values and the epistemological situation you want to cater to based on which choice will do the most good, then honestly cater to those values and that situation, and say that you are doing so. If your listeners won’t donate to HUC because they value feeling good about their donations, and they don’t feel good about helping ugly children, and you still want to cater to that audience, then explicitly add a term for feeling good about donations, say you are doing that, and give them a recommendation that truly matches their values.
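To make that concrete, here is a minimal sketch (in Python; the scores, the weight, and the mixing rule are all invented for illustration, not drawn from the post) of an effectiveness measure with an explicit feel-good term:

```python
# Hypothetical sketch: a donor-facing score that honestly mixes direct
# effectiveness with a declared "feel good about the donation" term.
# Every number and name below is invented for illustration.

def donor_score(effectiveness: float, feel_good: float, w: float) -> float:
    """Weighted mix of impact per dollar and donor warm-glow; w in [0, 1]."""
    return (1 - w) * effectiveness + w * feel_good

charities = {
    "Help Ugly Children": {"effectiveness": 2.0, "feel_good": 0.3},
    "Help Cute Children": {"effectiveness": 1.0, "feel_good": 0.9},
}

w = 0.7  # an audience that weights warm-glow heavily (invented number)
best = max(charities, key=lambda name: donor_score(**charities[name], w=w))
print(best)  # -> Help Cute Children, on this audience's declared measure
```

The honesty comes from saying the weight out loud: HCC really is the best option on the measure the audience was told it is best on.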
This will probably often run into problems. For instance, there is the general problem that sometimes (often?) people’s values are too terrible to be spoken aloud, and they certainly don’t want to follow a recommendation that endorses them. E.g. perhaps they are sexist, and will in fact devalue recommendations that help girls, yet they are not going to follow recommendations that are explicitly for effective sexist giving. This seems like a distinct (though closely related) general problem that I won’t go into now.
In sum, I think it is dishonest to advertise HCC as the most effective charity, and one shouldn’t do it. Even if you don’t have a principled stance against dishonesty, it seems unsustainable as an advice strategy. However, you might be able to honestly advertise HCC as the best charity on a modified effectiveness measure that better matches what your audience wants, and something like that seems promising to me.
At one point you said that you thought it was useful to be dishonest with others, in order to be able to be more honest with yourself. Did you change your mind about this, or are you simply saying that it is bad to be dishonest about this particular matter? Or is “I claim one shouldn’t be dishonest” simply one of the cases where you are being dishonest with others while being more honest with yourself, since it is better if people view you as honest, but at the same time you can tell yourself the real truth even if people would not like it?
If you haven’t seen it before, a lot of the ethics literature on “indirect consequentialism” is relevant to this. A couple canonical citations are:
Cocking, D. & Oakley, J. (1995). ‘Indirect Consequentialism, Friendship, and the Problem of Alienation.’ Ethics.
Railton, P. (1984). ‘Alienation, Consequentialism, and the Demands of Morality.’ Philosophy & Public Affairs.
I think that bad/secret/shameful values are a relatively minor problem.
The difficulty of entering into a shared epistemic situation, however, is a crippling problem.
It’s also unfortunately the case that, given how credit is generally assigned, if your goal is to maximise money/power as usually conceived, it’s often most effective to be dishonest: gain short-term cash flow, and use it to establish credibility with a larger, less discerning audience with lower truth standards, one that won’t be upset when your dishonesty inevitably catches up with you in the eyes of your initial core audience. This well-known phenomenon is called ‘selling out’. David Chapman’s excellent ‘Geeks, Mops and Sociopaths’ discusses the problem well but lacks a solution.
One solution is to publish a database of effective choices instead of making a single recommendation to each customer. The database would have columns for various properties of each option (area of influence, $value/$spent, &c.) and the customer could sort on the property in which they are most interested. If I were concerned that the customer would make sub-optimal choices if left to their own devices with the database, I could put an agent between the two (a short questionnaire, a customer service rep., &c.) to guide the customer to the option that aligns most closely with their own values. A rough sketch of both pieces follows.
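Here is a minimal sketch of that database plus guiding agent (in Python; every option, column, and bit of questionnaire logic is invented for illustration, not specified by the comment above):

```python
# Hypothetical sketch of a sortable "database of effective choices"
# with an optional agent guiding the customer. All data is invented.

OPTIONS = [
    {"name": "Help Ugly Children", "area": "child welfare", "value_per_dollar": 2.0},
    {"name": "Help Cute Children", "area": "child welfare", "value_per_dollar": 1.0},
    {"name": "Entertain Affluent Adults", "area": "leisure", "value_per_dollar": 0.1},
]

def sorted_by(column: str, descending: bool = True) -> list[dict]:
    """Let the customer sort the table on whichever property they care about."""
    return sorted(OPTIONS, key=lambda row: row[column], reverse=descending)

def guided_choice(cares_about_cuteness: bool) -> dict:
    """A crude agent between customer and database: one questionnaire answer."""
    ranked = sorted_by("value_per_dollar")
    if cares_about_cuteness:
        # Steer warm-glow donors to the most effective option they will accept.
        return next(row for row in ranked if "Cute" in row["name"])
    return ranked[0]  # effectiveness-minded customers get the top row

print([row["name"] for row in sorted_by("value_per_dollar")])
print(guided_choice(cares_about_cuteness=True)["name"])
```

The design point is the same honesty move as in the post: the sort key is exposed to the customer rather than baked silently into a single ‘best’ recommendation.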