Tag Archives: practical advice

One-on-one charity

People care less about large groups of people than about individuals, per capita and often in total. People also care more when they are one of very few who could act, rather than part of a large group. In many large-scale problems both of these effects combine. Climate change, for instance, is being caused by a vast number of people and will affect a vast number of people. Many poor people could do with help from any of many rich people, and each rich person sees themselves as one of a huge number who could help that undifferentiated mass, ‘the poor’.

One strategy a charity could use when both of these problems are present at once is to pair its potential donors and donees one-to-one. They could, for instance, promise the family at 109 Seventeenth St. that a particular destitute girl is their own personal poor person, that they will not be bothered again (by that organisation) about any other poor people, and that this person will not receive help from anyone else (via that organisation). This would remove both of the aforementioned problems.

If they did this, I think potential donors would feel more concerned about their own poor person than they previously felt about the whole group. I also think they would feel emotionally blackmailed and angry, and I expect the latter reaction would dominate. If you agree with my expectations, an interesting question is why this would be considered unfriendly behaviour on the part of the charity. If you don’t, an interesting question is why charities don’t do something like this.

Taking chances with dinner

Splitting up restaurant bills is annoying.

Good friends often avoid this cost by having one pay the whole bill one time and the other pay it the next, or better yet, by not keeping track of whose turn it is and letting it even out in the long term.

It’s harder to do this with lesser friends and non-friends one doesn’t anticipate many meals with, because one expects to be exploited by a continual stream of free-riders who never offer to pay, or to have to pay every time to show everyone that one is not such a free-rider, or to settle into some other annoying equilibrium.

There is an easy way around this. Flip a coin. Whoever loses pays the whole bill.

Why don’t people do this?

Here are some possible reasons, partly inspired by conversations with friends:

They don’t think of it

Coins have been around a long time.

It’s hard to arrange a coin flip that both people agree is random

One person flips and the other calls it?

They are risk averse

Meals are a relatively small cost that people pay extremely often, so they should expect a pretty fair distribution in the long run. If the concern is having to pay for fifty people at once when your income is not huge, either restrict the practice to smaller groups or keep open the option of opting out.
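
As a rough check on that long-run claim, here is a minimal simulation (the bill size, number of meals, and two-person setup are invented parameters, not anything from the post):

```python
import random

def simulate(n_meals=200, bill=40.0, seed=0):
    """Two diners flip a fair coin at each meal; the loser pays
    the whole bill. Returns each diner's total spending."""
    rng = random.Random(seed)
    totals = [0.0, 0.0]
    for _ in range(n_meals):
        loser = rng.randint(0, 1)  # fair coin: each loses with probability 1/2
        totals[loser] += bill
    return totals

a, b = simulate()
print(a, b)        # each ends up paying roughly half the grand total
print(abs(a - b))  # the gap grows only like sqrt(n_meals), so it shrinks
                   # relative to total spending as meals accumulate
```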

Using a randomising method such as a coin displays distrust, which is rude, but not using one would be costly because you don’t actually trust people

A coin could also display your own intention to be fair. And it doesn’t seem like such a big signal of distrust – I would not be offended if someone offered this deal.

Buying meals for others is a friendly and meaningful gesture – being forced to do it upon losing a bet sullies that ideal somehow

Maybe – I don’t know how this would work

Asking makes you look weird

This is an all-purpose reason for not doing anything differently. But sometimes people do change social norms – what was special about those times?

Sharing in the bill feels like contributing to something alongside others, which is a better feeling than paying all of it against your will, or than not contributing at all.

Maybe – I feel pretty indifferent about the whole emotional experience personally.

There are many inconvenient small payments that seem like they could be improved by paying a larger amount occasionally with some small probability. Yet I haven’t seen such a method put to use anywhere.
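
For concreteness, the general trick might look like the following sketch: replace each small payment with a charge of price/prob made with probability prob, which leaves the expected amount paid unchanged while making most transactions free. The coffee price and probability here are invented for illustration.

```python
import random

def probabilistic_payment(price, prob=0.1, rng=random):
    """Charge price/prob with probability prob, else charge nothing.
    Expected payment is (price / prob) * prob == price."""
    return price / prob if rng.random() < prob else 0.0

# e.g. a $3 coffee becomes a $30 charge one time in ten, on average
payments = [probabilistic_payment(3.0) for _ in range(100_000)]
print(sum(payments) / len(payments))  # hovers near 3.0
```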

Signaling for a cause

Suppose you have come to agree with an outlandish seeming cause, and wish to promote it. Should you:

a) Join the cause with gusto, affiliating with its other members, wearing its T-shirts, working on its projects, speaking its lingo, taking up the culture and other causes of its followers

b) Be as ordinary as you can in every way, apart from speaking and acting in favour of the cause in a modest fashion

c) Don’t even mention that you support the cause. Engage its supporters in serious debate.

If you saw that a cause had another radical follower, another ordinary person with sympathies for it, or another skeptic who thought it worth engaging, which of these would make you more likely to look into its claims?

What do people usually do when they come to accept a radical cause?

Matching game

Have you read the overview of this blog? If so, I would be pleased if you would tell me which of the following styles of thought you think is closest to that manifested in it:

Estimation is the best we have

This argument seems common to many debates:

‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.

This could make sense if X weren’t especially integral to the goal. For instance, if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.
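
For concreteness, here is the sort of computation the triangulation analogy points at (a toy example with made-up numbers, not anything from the post): small errors in the measured angles propagate into the inferred distance.

```python
import math

def triangulated_distance(baseline, alpha, beta):
    """Perpendicular distance to a target sighted at angles alpha and beta
    (radians) from the two ends of a baseline, via the sine rule:
    d = b * sin(alpha) * sin(beta) / sin(alpha + beta)."""
    return baseline * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

exact = triangulated_distance(1.0, math.radians(60), math.radians(60))
noisy = triangulated_distance(1.0, math.radians(61), math.radians(60))
print(exact, noisy)  # a one-degree sighting error shifts the answer by ~2%
```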

Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell whether you are increasing net utility, or by how much. The critic concludes that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly does not warrant abandoning the goal. One should always be better off putting whatever effort one is willing to contribute into buying what utilitarian accuracy it can, rather than throwing that effort away on a strategy that is more random with regard to the goal.

A CEO would sound ridiculous making this argument to his shareholders: ‘You guys are being unreasonable. It’s just not possible to know exactly which actions will increase the value of the company, or by how much. Why don’t we try to make sure that all of our meetings end on time instead?’

In general, when optimizing X is somehow integral to the goal, the argument must fail. If the point is to make X as close to three as possible, for instance, then no matter how bad your best estimate of X under different conditions is, you can’t do better by ignoring X altogether. If you had a non-X-estimating strategy which you anticipated would do better than your best estimate at getting a good value of X, then you would in fact believe yourself to have a better X-estimating strategy.
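
A toy simulation of that last point (all distributions and numbers invented for illustration): when the goal is getting X near three, choosing by even a quite noisy estimate of X beats choosing without reference to X.

```python
import random

def compare(n_options=10, noise=1.0, trials=10_000, seed=0):
    """Each option has a true X; we only see a noisy estimate of it.
    Compare average |X - 3| when choosing by estimate vs. at random."""
    rng = random.Random(seed)
    est_err = rand_err = 0.0
    for _ in range(trials):
        true_x = [rng.uniform(0, 10) for _ in range(n_options)]
        estimates = [x + rng.gauss(0, noise) for x in true_x]
        by_estimate = min(range(n_options), key=lambda i: abs(estimates[i] - 3))
        at_random = rng.randrange(n_options)
        est_err += abs(true_x[by_estimate] - 3)
        rand_err += abs(true_x[at_random] - 3)
    return est_err / trials, rand_err / trials

print(compare())  # even quite noisy estimates land far closer to three
```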

I have criticized this kind of argument before in the specific realm of valuing human life, but it seems to apply more widely. Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. This is arguably similar to lines like ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.

Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war, for instance, we should do something qualitative with scenarios and the like. This could be a defensible position: perhaps intuitions can assess probabilities better implicitly, via some other activity, than by thinking about them explicitly. However, I have not heard this claim accompanied by any such motivating evidence. And even if it were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources, rather than disregarding quantitative assessments altogether.
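
On the ‘convert and aggregate’ suggestion, one standard way to pool probability judgments from several sources is to average them in log-odds space. The sketch below uses equal weights and made-up numbers; both are assumptions for illustration, not anything the post specifies.

```python
import math

def pool_probabilities(probs):
    """Pool probability estimates by averaging their log-odds, then
    mapping the mean back through the logistic function."""
    logits = [math.log(p / (1 - p)) for p in probs]
    mean = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean))

# e.g. a scenario exercise read as 0.05, an expert at 0.02, a model at 0.10
print(pool_probabilities([0.05, 0.02, 0.10]))  # ~0.047
```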

Futarchy often prompts similar complaints: that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, some representation of what people want has to get into whatever system of government is used, for the result not to be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly, in an unknown but messy fashion, while they do other things is probably not more accurate than asking people what they want. It seems, however, that people think of the former as a successful way around the measurement problem, not as a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?