Suppose you are in the business of making charity recommendations to others. You have found two good charities which you might recommend: 1) Help Ugly Children, and 2) Help Cute Children. It turns out ugly children are twice as easy to help, so 1) is the more effective place to send your money.
You are about to recommend HUC when it occurs to you that if you ask other people to help ugly children, some large fraction will probably ignore your advice, conclude that this effectiveness road leads to madness, and continue to support 3) Entertain Affluent Adults, which you believe is much less effective than HUC or HCC. On the other hand, if you recommend Help Cute Children, you think everyone will take it up with passion, and much more good will be done directly as a result.
What do you recommend?
The answer to the question depends on your self-esteem. Everyone wants to do more “good”. The only ones who would compromise that are those who still need to work on themselves. So the question is how cheerleading for ugly kids helps you. And it basically just gives you contrarian status, which is non-trivial.
Segment your audience. Start two organizations with different marketing, one of which recommends HUC and one of which recommends HCC. (See: GiveWell vs. The Life You Can Save.)
There’s a short-run vs. long-run trade-off here. In the short run, you may get more donations and utility by promoting cute children, whereas in the long run you have more credibility by promoting the ugly children, if that is actually more effective. So it depends in part on how long you think you / your organization will be around, and how good a memory your audience has.
In the same way that markets have learned to navigate the labyrinthine requirements of doing price discrimination without offending people, I think EA must learn to navigate the concept of cause-agnosticism. This is similar to what Ben is saying about audience segmentation. Corporations will segment their markets with different branding etc. Getting people excited about effectiveness vs a specific cause is asking them to go meta, and thus a tough sell. Ultimately, spreading cause-agnosticism as a meme seems like it would be very high impact if successful. But in the short term there are probably advantages to simply accepting that you need to segment your audience.
I don’t believe this is a moral question. To help anyone in this manner is a pro-social act, but for me there’s no moral obligation to do so.
One way to illustrate this is to imagine that you pass a pond every day where a person is constantly pushing in, and thus drowning, children. You pass the pond once a day and can save a child. You could even dedicate your life to the charity Pond Watch and save a fair proportion of them. However, neither action tackles what I see as the moral issue here, which is the pushing-in of the children.
For me all the moral blame falls on the murderer, and if any moral obligation exists for me here at all, it’s to expose them or prevent them from continuing to push children into the pond.
This is a poor analogy, because there’s rarely a morally responsible “murderer” that you can deal with. Malaria parasites and hurricanes are not the kind of things you can assign moral blame to. “People don’t have a duty to distribute bednets; mosquitoes have a duty not to bite people” is a rather useless thing to say.
Thanks for the reply Doug.
You’re right, my illustration uses a case where there is a moral agent causing the problem, which is not specified above.
So yes, I should have said that in those cases I see no moral duty, only optional pro-social acts (which I do support), whereas if agents (or systems run by agents) can be identified, then I do consider it a moral question, as per my last comment.
My point is that many problems in the world (famines, disease, etc.) do have causes that are rooted in the socioeconomic systems we use to provision ourselves globally, and in those cases – for me – the moral onus falls squarely on the system and those who run it, rather than on the general public providing charity.
Excellent post. My answer: We already agree that charity recommendations are and should be made relative to some normative/epistemic assumptions. (E.g., GiveWell classic is broadly consequentialist, values people over animals, and errs on the side of proven effectiveness.) So I would just explicitly choose a framework that excluded helping ugly children, and then you could honestly recommend helping cute children within that framework.
Estimate the probability, p1, that a randomly-selected non-EA philanthropist will support HUC at your recommendation. Then do the same, p2, for HCC. If 2p1 > p2, advocate HUC. Else, advocate HCC. The real question is: how do you generate the probability estimates?
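The decision rule above can be sketched as a short calculation. This is a hypothetical sketch, not anyone's actual model: the function name is made up, and the factor of two comes from the post's premise that ugly children are twice as easy to help.

```python
# Compare the expected good per person reached by each recommendation.
# Assumed units: a follower of HUC advice produces 2 units of good,
# a follower of HCC advice produces 1 (the post's 2x-effectiveness premise).

def best_recommendation(p1, p2, huc_multiplier=2.0):
    """p1: probability a random philanthropist follows a HUC recommendation.
    p2: probability they follow a HCC recommendation."""
    ev_huc = huc_multiplier * p1  # expected good from advocating HUC
    ev_hcc = 1.0 * p2             # expected good from advocating HCC
    return "HUC" if ev_huc > ev_hcc else "HCC"

# If only 20% follow the HUC advice but 50% would follow HCC,
# 2 * 0.2 = 0.4 < 0.5, so the rule says advocate HCC:
print(best_recommendation(0.2, 0.5))  # HCC
```

As the commenter notes, the rule itself is trivial; everything rides on where the probability estimates p1 and p2 come from.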
“much more good will be done directly as a result”
You post your ranking based on your actual assessment, and you allocate advertising (or even the scarce resource of what you talk about during interviews) based on the expected value of that publicity. Optimal PR donation is different from optimal dollar donation.
So post the rankings, but always mention how much more effective Helping Cute Children is than Entertain Affluent Adults.
You could also make your default table ranked by a measure of the expected value of that charity having a higher spot in the table, but still cite the raw utility/dollar number, and allow sorting on it.
This would encourage charities to optimize their image to get more donations from people who look at the top of EA rankings. This might not be what you want to optimize for, but it does seem like it would improve the conversion funnel overall.
If we get a charity “You won’t believe 10 reasons we’re so effective” that is only slightly better than average but gets everyone to click thru and donate, is that bad?
If you recommend HCC and they become a supporter, they might become more amenable to supporting HUC in the future. “Look, you’re already trying to help children — you can do an even better job by helping the ones that everyone else overlooks!”
Pingback: Recommend what the customer wants | Meteuphoric