People tend to give small amounts of money to many charities instead of a lot to one favorite charity. It has been noted that this is irrational behaviour, assuming one cares mainly about the recipients. It is rational though for people who are purchasing ‘warm fuzzy feelings’ or signals of charitableness. So those are the usual explanations.
This nice experiment, via Marginal Revolution, suggests another explanation:
Every year, 90% of Americans give money to charities. Is such generosity necessarily welfare enhancing for the giver? We present a theoretical framework that distinguishes two types of motivation: individuals like to give, for example, due to altruism or warm glow, and individuals would rather not give but dislike saying no, for example, due to social pressure. We design a door-to-door fund-raiser in which some households are informed about the exact time of solicitation with a flyer on their doorknobs. Thus, they can seek or avoid the fund-raiser. We find that the flyer reduces the share of households opening the door by 9% to 25% and, if the flyer allows checking a Do Not Disturb box, reduces giving by 28% to 42%. The latter decrease is concentrated among donations smaller than $10. These findings suggest that social pressure is an important determinant of door-to-door giving. Combining data from this and a complementary field experiment, we structurally estimate the model. The estimated social pressure cost of saying no to a solicitor is $3.80 for an in-state charity and $1.40 for an out-of-state charity. Our welfare calculations suggest that our door-to-door fund-raising campaigns on average lower the utility of the potential donors.
Assuming that it is more costly to refuse to give the first dollar than the second, and so on, people give to a lot of charities because they are purchasing relief from social pressure (or whatever you want to call this), and a lot of charities are attacking them with social pressure.
I think this explains some of the trend, but not nearly all of it. However, I haven’t seen data on how widely distributed giving is on just those occasions when people seek out charities themselves.
Maybe the campaign for efficient charity can have some effect on this section of givers. It provides a convincing excuse. I don’t feel so bad declining those who solicit donations when I can claim that as soon as they make the top of Giving What We Can or GiveWell’s lists I will be morally permitted to consider them. Users of this excuse need not actually donate anything to better charities, however.
There should be more links, but I’m typing on a phone. Turns out to be less awkward than I imagined, except adding links.
Much morality is internalized in terms of the categorical imperative or the golden rule. When people donate, they often do it because they believe that’s what everyone ought to do. But few people want every cent to go to the (currently) most efficient charity or (though more likely) to their favorite charity. To act in such a way as to exemplify a simple rule that everyone could be held to, they distribute their contributions.
This is intuitively obvious but it’s nice to see it demonstrated. I use it this way: it lowers the cost to me of refusing to give to beggars and so on, because I realise it’s not an effective way to give.
On the other hand, GiveWell and so on, by making such a strong case that giving away my money is the right thing to do, may have lowered my welfare even more effectively!
“People tend to give small amounts of money to many charities instead of a lot to one favorite charity. It has been noted that this is irrational behaviour, assuming one cares about the recipients.”
and links to an article by Steven Landsburg. I’m sorry, I don’t actually see an argument in that article, just an assertion that $200 to charity A or $200 to charity B must always be a better use of your money than $100 to both, and then specious psychological theorizing about why people spread their charity dollars around instead.
Isn’t the conclusion trivial?
What if the truly optimal choice is $120 to one charity and $80 to the other?
In more detail: Landsburg reasons as if charities have a single scalar output (goodness or utility) and are distinguished only by their efficiency. But you can also say: charities work on different components of my utility function, which is a weighted sum of diverse values, so if I assume equal efficiency of charities, then my optimal donation strategy will allocate funds to charities in proportion to the relative importance of the specific values they help to realize.
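One way to make this claim precise, under an added assumption that is not in the comment, namely that each value component has diminishing (say logarithmic) returns: if utility is a weighted sum of logarithms, the optimal split of a budget is exactly proportional to the weights. A quick numerical sketch (the weights and budget are invented for illustration):

```python
import math

def utility(weights, alloc):
    """Weighted-log utility: sum of w_i * log(x_i) over the charities."""
    return sum(w * math.log(x) for w, x in zip(weights, alloc))

def proportional_split(weights, budget):
    """Closed-form optimum for weighted-log utility: give in proportion to w_i."""
    total = sum(weights)
    return [w / total * budget for w in weights]

# Hypothetical example: two charities, valued 3:1, with a $100 budget.
weights, budget = [3.0, 1.0], 100.0
print(proportional_split(weights, budget))  # [75.0, 25.0]

# Coarse grid search over integer splits agrees with the closed form.
best_x = max(range(1, 100), key=lambda x: utility(weights, [x, budget - x]))
print(best_x)  # 75
```

So under this (assumed) diminishing-returns model, splitting donations in proportion to how much you care about each cause is the rational strategy, not a mistake.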
Another reason for diversified giving is hedging. What if some of the money ends up being lost or misused? Diversity is an obvious strategy for increasing the chance that some of it will do some good.
The argument is just that this behaviour doesn’t make sense if what you want is to help other people. If your utility function includes something else, sure.
re hedging, see discussion: https://plus.google.com/100781503468861588693/posts/ftYLKbaT2cU
Still waiting to see the argument, i.e., *reasons* to believe that “this behaviour doesn’t make sense”.
I’m having a hard time thinking of other situations where Landsburg’s conclusion applies. You don’t spend your life eating just the single best food or using just the single best word.
Did you see my response at the link I gave you above? Diversifying your giving means less expected welfare for the recipients, and they don’t benefit from your hedging (though you might). What other kind of argument do you want?
Hedging is one thing, but Mitchell’s other argument is stronger: “But you can also say: charities work on different components of my utility function, which is a weighted sum of diverse values, so if I assume equal efficiency of charities, then my optimal donation strategy will allocate funds to charities in proportion to the relative importance of the specific values they help to realize.”
A person with a utility function like Mitchell describes would ordinarily be described as “wanting to help other people.” The difference is that a person with that function doesn’t think of helping as undifferentiated and fungible. That people don’t operate the way you suggest indicates not that people don’t want to help people but that people aren’t utilitarians. Someone might make _separate_ virtues out of giving to the poor and giving to advance scientific research. The distinctions can be finer than that.
Suppose there are two charities, a friendly singularity charity and an existential risk charity. The situation is that if we reach a singularity, the value systems of the first AIs will dominate the rest of history; but there is also a significant possibility that the human race will be wiped out by nanotechnology before a singularity can even happen. Both charities are underfunded.
Does anyone still want to argue that it is obviously irrational to donate to both charities rather than just one?
Yes, I think that would be irrational (given the scalar utility function artificially imposed by utilitarians). I think you’re not distinguishing two questions: should only one be funded, or should any given individual contribute to more than one. You can want both to be funded, but at any time, one or the other will have the greater marginal value. The exception would be if you’re very rich: after you contribute enough to one, diminishing returns could theoretically set in. Then, and only then, should you divide your contribution.
This assumes that you’re able to tell which one is objectively more important to support, and it also assumes that you can’t make your small donations expressive of your personal strategic assessment by using them to target particular sub-projects of the charities. But in the real world, we don’t get to know the exact best thing we could be doing, and we do often get the option to target our donations in this micro-strategic way.
You say that giving to two charities at once is only a matter for the very rich. Can you say just how rich you have to be? What if you have a choice between some local, very small-scale charity where $100 is the most they could use, and then one of Landsburg’s hopeless causes, where you could give well over $1 million and, though you would do some good, you would still not be solving the overall problem? Are you able to tell me that it is definitely better to spend $200 on the hopeless cause, rather than $100 on the fully solvable problem and $100 on the hopeless cause?
Perhaps my comment about being very rich was misleading because it really isn’t a matter of how rich you are but whether you can give enough that diminishing returns reverses the priorities.
We’re imagining here that we’re both utilitarians, which, it would seem, is false for both of us. You make your most rational estimate of the expectation value of two charities you’re partial to: A, an animal shelter; B, a bomb shelter. You decide that the expectation value in pure utiles is higher for A.
Your point relates to handling uncertainty. But uncertainty is already included in your estimate of the expectation value. If you’re uncertain about the value of both contributions, it remains the case that if you cover the uncertainty by shifting resources to the less promising charity, you reduce the expected value of your contribution.
We ordinarily avoid putting all our eggs in one basket because of our risk aversion. But from a utilitarian perspective, the relevant risk in the case of charitable giving isn’t to you but to the proceeds, and concentrating your gift, one contribution among a great many, in the most efficient basket doesn’t increase the riskiness of the proceeds. Distributing the risk only matters if you’re a virtue ethicist who wants to reduce the risk to your own contribution, not the risk to the proceeds.
Dividing the project into subprojects doesn’t change the logic.
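The linearity doing the work in this reasoning can be shown with a toy sketch (the per-dollar figures are invented): if each charity delivers a constant expected value per dollar, the total is linear in the split, so every dollar moved from the higher-EV charity to the lower-EV one strictly lowers the total.

```python
# Hypothetical expected utiles per dollar for two charities.
ev_per_dollar = {"A": 2.0, "B": 1.5}
budget = 100

def expected_value(dollars_to_a):
    """Total expected value of splitting the budget between A and B."""
    return (ev_per_dollar["A"] * dollars_to_a
            + ev_per_dollar["B"] * (budget - dollars_to_a))

print(expected_value(100))  # 200.0 -- everything to A
print(expected_value(50))   # 175.0 -- an even split does strictly worse
```

Under this linear model the optimum is always a corner solution: all to A. The disagreement in the thread is precisely over whether the linear model is the right one.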
“You make your most rational estimate of the expectation value of two charities you’re partial to: A, an animal shelter; B, a bomb shelter. You decide that the expectation value in pure utiles is higher for A.”
But why can’t it be higher still for some combination of support for A and support for B?!
The *only* basis I can see for this continued insistence that the best option must be all of one or all of the other, is the bizarrely naive assumption that the good done by a charity is basically a linear function of the money donated. So every charity has a constant of proportionality expressing increase in utility per increase in donation, and you should give all your money to the charity with the best returns, end of story. As if each charity were a utility factory mass-producing a single good of fixed value at fixed cost.
Now let’s suppose a completely different model for the way that utility scales with size of donation: a staircase function (series of step functions). Each “plateau” on the staircase represents a qualitative increase in the amount of good that the charity is able to deliver, achieved because of the qualitative increase in funding which lets it attain the next level. As you travel up the staircase, you repeatedly encounter “locally diminishing returns”, only to then encounter “locally increasing returns” later on.
If both your charities have a “rate of return” described by a staircase, then it’s very possible that you may not have enough money to drive either charity two levels up its staircase, but you may have enough to make both charities advance by one level on their respective staircases, and that this is the best use of your charitable donation.