Intuitions and utilitarianism

Bryan Caplan:

When backed into a corner, most hard-line utilitarians concede that the standard counter-examples seem extremely persuasive.  They know they’re supposed to think that pushing one fat man in front of a trolley to save five skinny kids is morally obligatory.  But the opposite moral intuition in their heads refuses to shut up.

Why can’t even utilitarians fully embrace their own theory? 

He raises this question to argue that ‘there was evolutionary pressure to avoid activities such as pushing people in front of trolleys’ is not an adequate debunking explanation of the moral intuition, since there was also plenty of evolutionary pressure to like not dying, and other things that we generally think of as legitimately good. 

I agree that one can’t easily explain away the intuition that it is bad to push fat men in front of trolleys with evolution, since evolution is presumably largely responsible for all intuitions, and I endorse intuitions that exist solely because of evolutionary pressures. 

Bryan’s original question doesn’t seem so hard to answer though. I don’t know about other utilitarian-leaning people, but while my intuitions do say something like:

‘It is very bad to push the fat man in front of the train, and I don’t want to do it’

They also say something like:

‘It is extremely important to save those five skinny kids! We must find a way!’

So while ‘the opposite intuition refuses to shut up’, if the so-called counterexample is persuasive, it is not in the sense that my intuitions unanimously say that one should not push the fat man while my moral stance insists on the opposite. My moral intuitions are on both sides.

Given that I have conflicting intuitions, it seems that any account would conflict with some intuitions. So seeing that utilitarianism conflicts with some intuitions here does not seem like much of a mark against utilitarianism. 

The closest an account might get to not conflicting with any intuitions would be if it said ‘pushing the fat man is terrible, and not saving the kids is terrible too. I will weigh up how terrible each is and choose the least bad option’. Which is what utilitarianism does. An account could probably concord more with these intuitions than utilitarianism does, if it weighed up the strength of the two intuitions instead of weighing up the number of people involved. 

I’m not presently opposed to an account like that, I think, but first it would need to take into account some other intuitions I have, some of which are much stronger than the ones above: 

  • Five is five times larger than one
  • People’s lives are in expectation worth roughly the same amount as one another, all else equal
  • Youth and girth are not very relevant to the value of life (maybe worth a factor of two, for difference in life expectancy)
  • I will be held responsible if I kill anyone, and this will be extremely bad for me
  • People often underestimate how good for the world it would be if they did a thing that would be very bad for them.
  • I am probably like other people in a given way, in expectation
  • I should try to make the future better
  • Doing a thing and failing to stop the thing have very similar effects on the future.
  • etc.

So in the end, this would end up much like utilitarianism.
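
To make the weighing concrete, here is a minimal sketch of the kind of arithmetic these intuitions suggest. Every number in it (the factor of two, the act/omission penalty) is an illustrative assumption, not a settled exchange rate between intuitions:

```python
# Toy weighing of the trolley choice, using the intuitions listed above.
# Every number is an illustrative assumption, not a settled exchange rate.

LIVES_SAVED = 5     # five skinny kids
YOUTH_FACTOR = 2.0  # each kid's life counted at up to 2x the man's,
                    # for the difference in life expectancy
ACT_PENALTY = 0.1   # assumed small extra badness of killing over letting die
                    # ("very similar effects on the future")

def survivors_value(push: bool) -> float:
    """Value of who survives, in units of one adult life."""
    if push:
        # the five kids live; the one man dies, plus the act penalty
        return LIVES_SAVED * YOUTH_FACTOR - ACT_PENALTY
    # the man lives; the five kids die
    return 1.0

print(survivors_value(push=True))   # 9.9
print(survivors_value(push=False))  # 1.0
# Refraining wins only if the act/omission penalty is set to roughly
# nine adult lives -- far beyond "very similar effects on the future".
```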

Do others just have different moral intuitions? Is there anything wrong with this account of utilitarians not ‘fully embracing’ their own theory, and nonetheless having a good, and highly intuitive, theory?

34 responses to “Intuitions and utilitarianism”

  1. You omit the actual “intuition” (rejected by utilitarianism) that, even with identical consequences, wrong acts of commission are worse than acts of omission.

    If you think this “intuition” isn’t robust, you should ask why almost all criminal justice and civil law systems accept it. (Criminal law expresses, at its core, moral “intuitions.”)

    “Intuitions” appears in scare quotes because they would better be termed fantasies. What are they intuitions of? Intuitions about reality must be rendered consistent, but “intuitions” of morality are morality’s very substance, and if they’re inconsistent, it’s because we have contradictory moral concepts. By ignoring the claims of conflicting intuitions, you can only pretend to render them consistent.

  2. Yes. A utilitarian should always think strategically, and strategy is extremely complicated, with lots of uncertainty and considerations pulling in multiple directions. Part of effective strategy is knowing when you don’t have enough information/time to optimize fully (indeed, that’s pretty much always), and so you need to use shortcuts and heuristics. Which will sometimes conflict. So a good utilitarian will have conflicting intuitions about some situations, as different usually reliable strategies are pointing in different directions. Since I do in fact have conflicting intuitions about many moral situations (as you apparently do as well), I find a moral theory that explains why my intuitions should be expected to be messy far more satisfying than a theory that gives clear answers every time. The clear answers just aren’t believable.

  3. Caplan’s post feels like a straw man. Nevertheless it’s nice to see such a sensible defense of utilitarianism.

  4. Nice post! I feel similarly to you. My intuition against pushing the person feels to me a lot like my intuition against amputating a person’s limb if it was gangrenous. My other intuitions (about saving the person’s life) are in favour of it; I would hope I could do it; if it was someone else in the situation I would think they should do it. But I feel a really strong emotional reaction against it just thinking about it (based on hating the idea of causing pain), and I think it’s unlikely I’d be able to do it.

  5. “People often underestimate how good for the world it would be if they did a thing that would be very bad for them.”

    This is a very interesting idea…

    My first guess is that if I asked some superhuman intelligence: “Tell me the best thing I could realistically do for the world, ignoring the negative value of suffering that it would cause to me (but do consider how my suffering could reduce my ability to follow the plan, etc.)”, there could be some very impressive plans. On the other hand, if I tried to devise such plan myself, with all my limited intelligence and biases, the real-life results would probably not be very impressive; similarly, if another human tried to make this plan for me.

    But it feels like this area is worth exploring, at least for an interesting debate. Imagine we put some limit on suffering: e.g. no permanent health damage, and the project will only take one year. What is the greatest amount of good you could do?

    • If you’re reasonably young and healthy and live in the United States, you can arrange for the charity of your choice to receive roughly 200 times your net worth by buying the largest term life insurance policy you can and killing yourself two years later. (U.S. law requires that life insurance pay out in the event of suicide if the policy is at least two years old.) Whether this increases global utility is left as an exercise for the reader. (After all, people who are young and have assets are also often ones with high earning potential over a lifetime.)

      I don’t think we need to discuss the ethics of getting life insurance on someone else and murdering them; if you’re the one who paid the premiums and get caught having committed the murder, there’s a good chance that the law won’t let the charity keep the insurance money.
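
      For concreteness, the arithmetic behind the “roughly 200 times” figure might run as follows. This is a minimal sketch: the net worth, income, and the assumption that insurers cap term coverage near a multiple of income are all illustrative, not taken from the comment:

      ```python
      # Rough arithmetic for the "~200x net worth" claim; all figures assumed.
      net_worth = 10_000       # a young person with few assets
      annual_income = 80_000
      income_multiple = 25     # assumed underwriting cap on term coverage

      payout = annual_income * income_multiple  # $2,000,000 face value
      print(payout / net_worth)                 # 200.0
      ```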

      • Hedonic Treader

        This can’t be done by many people, either. Otherwise life insurance would become more expensive, or the laws would eventually change.

        In general, stealing for charity requires really effective charities. It’s not like you produce extra productive output by killing yourself (you may save costs though).

  6. I think many utilitarians really would push the fat man, even if they claim otherwise. Saying so is low status; people get to act outraged at you if you endorse violating cultural Schelling points like murder.

    Personally, I would push the fat man. (Provided that I won’t be arrested or something like that – I’m generally selfish before utilitarian.)

    • You probably don’t need to guess about choices people make, as the subject apparently has been studied empirically. (Here’s a brief popular review of some of this research: http://tinyurl.com/ofwz5y5 )

      The interesting finding is that you’re right that declining to push the fat man is the typical choice; but you’re wrong that this is because murder is a Schelling point. When the problem is changed so that you need only throw a switch to reroute the train, most people say they’d do it. (It’s still murder.)

      This supports my construal-level theory analysis, in that the locus of action, being distant from the switch thrower, elicits far mode. (I wouldn’t throw the switch. To me, such an act of arrogance requires far more than a mere five lives.) [The TV series 24 played on our uncertainty about when utilitarianism kicks in.]

      My CLT analysis, it turns out, isn’t as novel as I had thought. The author of the NY Times article linked above expresses the same basic idea:

      A cool utilitarian calculus has its place, and so do our subrational instinctive juices. If either were missing, we would make some truly terrible choices.

      [Which is the higher status answer? I had the opportunity to ask a couple very low status young people what they would do. The woman said that she wouldn’t do anything unless she was personally acquainted with someone among the five; the man thought the problem was about whether very obese people deserved to die. He decided it wasn’t worth killing him because very obese people have low life expectancies anyway.]

  7. Alexander Stanislaw

    The fat man challenge is such an easy one.

    Here is a much harder one. If you are a utilitarian, would you support a human farming industry? The medical advances and benefits from having a store of organs and test subjects would be enormous. Each farmed human would provide at least one life’s worth of utils. We would suddenly be able to conduct randomized controlled trials on all the things we can’t right now because free people have rights. The utilitarian credentials of this world seem solid to me.

    • Hedonic Treader

      @Alexander: The utilitarian credentials aren’t solid, and the fact that you think they are indicates how easily the utilitarian case for violating personal rights is overestimated, which is harmful from within utilitarianism.

      Here are my 2 primary reasons why the utilitarian credentials are far worse than you think:

      1. Humans are master rationalizers, hypocrites and violent apes by nature. This is not an insult, but an evolutionary fact. Giving them excuses to use aggression for personal benefit will lead to more aggression for personal benefit, not to the greater good. Slavery was real, and clearly net-bad. Genocides are real, and clearly net-bad. Yet there is a long history of pseudo-utilitarian rationalizations for both (and more rights violations).

      2. The good thing about humans is that they can consent. If you want organ donations, legalize voluntary organ markets. If you want dangerous human experiments, legalize voluntary markets for dangerous human experiments. When governments ban a thing out of paternalism (usually with pseudo-utilitarian arguments) and people like you suggest making the same thing involuntary (also with pseudo-utilitarian arguments), then not only must one side be wrong, but it is possible for both sides to be wrong, and voluntary markets are a reasonable alternative. (Politics, of course, doesn’t work that way.)

      • Alexander Stanislaw

        @Hedonic Treader

        If I understand your objection, you claim that this world is not good under utilitarianism because consent is being violated? But utilitarianism is not a consent maximizing ethical theory, it is a utility maximizing ethical theory. And I contend that the human farming world both decreases consent and increases utility.

        For each farmed human at least one life is saved (due to organ donation). And in the long run, many more lives are saved due to advances in biomedical science. Their consent is violated, but there is an overall net gain in utility. If you dispute this then I’d be interested to hear why.

        • Hedonic Treader

          Yes, I dispute this, but that was not the point about consent. The point about consent was that it creates an alternative option, which is voluntary markets.

          Before you enslave anybody to take their organs etc., first get government to unban the voluntary sale of organs. And so on.

          The reason why I dispute slavery/human farming leads to an overall net gain in utility is that it has severely negative political externalities (human rights are an important Schelling point whose erosion implies massive potential harm), that its benefits are dubious compared to the suffering caused, and as pointed out above, better alternatives are yet unexplored.

          • Alexander Stanislaw

            What might that harm be exactly? If it’s harm to farmed humans, I’m not seeing the problem.

            Organ donation markets do not address the biomedical research aspect. (And it’s not obvious that they would completely satisfy the demand for organs in the way that human farms would.)

            • Hedonic Treader

              “What might that harm be exactly?”

              If you don’t see the potential harm in eroding human rights as a Schelling point, you have a naive understanding of human history. Just today I read about the laws that governed black slavery in the US in the 17th century, and I was shocked at how openly racist and cruel they were. There is no reason why the same (utilitarian) goals could not have been achieved by simply employing voluntary immigrants and workers. The only reason to use nonconsensual slavery was parasitic exploitation, that is, creating value for some by inflicting greater disvalue on others by means of force. Yet up to this day you meet people who simply pretend that utilitarianism doesn’t have to be opposed to slavery because someone benefits.

              “If its harm to farmed humans I’m not seeing the problem.”

              Well, their suffering would surely count as a negative? (not that I want to go into a “logic of the larder” discussion here)

              “Organ donation markets do not address the biomedical research aspect.”

              But I already mentioned markets for subjects in such research. If it is high risk, you have to pay well.

              “(and its not obvious that they would completely satisfy the demand for organs in the way that human farms would).”

              Supply and demand. If people aren’t willing to pay the price at which other people are willing to sell organs, then maybe those organs should not be sold in the first place.

              I stipulate that there is practically no utilitarian benefit that slavery can achieve that can’t be achieved by genuine consensualism. In the current reality, every utilitarian should be libertarian-leaning for this reason.

          • Check your estimate of the strength of the Schelling-point argument against the actual facts of slavery, whether in the U.S. South or (say) Ancient Greece. Did slavery produce a total moral collapse due to lack of ethical bright lines?

            • Hedonic Treader

              To clarify, I’m not concerned about total moral collapse. The suffering caused by slavery, or genocide, or legal routine torture is the massive harm I refer to. This is what human rights are supposed to prevent, as a political and legal tool.

              When (self-declared) utilitarians suggest violating those rights on a large scale, I want to know why. The realistic answer is typically something like a few lives will be saved (without considering alternatives), or some money will be saved (without putting a proper price on the suffering caused or political externalities).

        • Consent is relevant to maximizing utility in general because people want things to be good (at least for themselves), so allowing them to direct their own behavior is a decent mechanism for making things better.

          Also, most ways to improve things should be agreeable to all involved parties, if transfers are possible, modulo differences in ability to pay. Given that transfers are often possible, and people often have comparable resources, having to force an ‘improvement’ on people is a decent sign that it is not in fact an improvement. In this organ case, if it is really worth farming people for organs, we might expect the people using the organs to be willing to pay enough for them that the farmed people would be happy to be alive.

          Further, markets mean that people who are especially happy to sell their organs can, while those who really don’t want to can abstain – this adds further value over blanket enforcement.
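
          A toy numerical version of this transfer argument; the prices are assumed for illustration:

          ```python
          # Toy transfer argument: is a voluntary trade possible? Assumed prices.
          buyer_value = 500_000   # most an organ recipient would pay to live
          seller_price = 100_000  # least a willing donor would accept

          if buyer_value >= seller_price:
              # any price in between leaves both parties better off,
              # so the gain needs no forcing
              print("voluntary trade at, say,", (buyer_value + seller_price) // 2)
          else:
              # no mutually agreeable price: the harm to the seller exceeds
              # the benefit to the buyer, so a forced 'improvement' isn't one
              print("no voluntary trade; forcing it destroys value")
          ```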

          • Also, most ways to improve things should be agreeable to all involved parties, if transfers are possible, modulo differences in ability to pay.

            Not if you’re a utilitarian, who thinks future lives count. You’re missing the point of the hypothetical, which is (or I take to be) that radically nonconsensual practices are justified (under utilitarianism) when they benefit enough future persons.

      • Alexander Stanislaw

        Oh, regarding 1: that is a feature, not a bug. Human farming is bad for farmed humans, and good for non-farmed humans. The good outweighs the bad if measured in utility (for each farmed human, many more non-farmed humans benefit, and they benefit greatly).

  8. @Alexander: I think the human farm challenge does not really address the same ethical problem as the trolley problem. The fat man question is designed to make stark the contrast between consequentialism (“the right action is the one with the best consequences, no constraints”) and deontological constraints on action (“you cannot murder”). If I am a bystander observing the whole situation I can hope/wish that the fat man is pushed so that more people live (i.e. I can believe the *world-state* with five survivors is the better outcome) even if I am a deontologist and think the *action* of pushing him is wrong.

    In the case of the human farms, however, myself and other people with normal anti-farming intuitions feel that the world with human farms is a worse place than the world without them, not just that establishing them violates constraints on action. This means that the disagreement between “farmers” and “anti-farmers” does not map to “consequentialism vs deontology”, as the trolley problem does, but to different consequentialisms with different conceptions of the good (say, one focusing narrowly on aggregate total human health, and one taking into account other more complex factors).

    • Alexander Stanislaw

      Why do you think the world is worse? For each farmed human, many lives are saved and biomedical advances will result in even more lives being saved. −1 life, +many lives is a good thing under utilitarianism.

  9. Hedonic Treader

    @Katja

    “People often underestimate how good for the world it would be if they did a thing that would be very bad for them.”

    Katja, you don’t usually respond when I ask you direct questions, but in this case, I challenge you to give reasonable, realistic examples for this claim.

    • Since I claimed that this was an intuition I have, not a fact about the world which I consider to be well evidenced, it wouldn’t seem very damning if I didn’t have such an example.

      The kind of thing I have in mind is, if you have the choice to live with your parents, and you’d really rather not, but it would save a lot of money, which you could then give to a good charity, then I expect you to overestimate the extent to which this will just be bad for the world, since it will reduce your productivity, and worsen your ability to do useful networking etc etc.

      I meant it as an instance of the general tendency of people to rationalize in their own favor.

      • Hedonic Treader

        Thanks for the response. I guess I was hung up on the “very bad for them” part. I could see this for some people in special circumstances, such as whistleblowers or the people who tried to kill Hitler.

        For normal people in normal situations, very large exchange rates in altruism vs. egoism seem plausible only within limited margins, or if we accept narratives about long-term flow-through effects like Brian’s Fermi calculation in this piece. I guess decades of earning to give could count as “very bad for them” and also be very good for the world. Unless the narratives about charity effectiveness turn out to be false.

  10. Our acts are near; their consequences are far. Humans are deontological in near mode; they’re consequentialist in far mode. Both forms of moral realism reify one pole or the other.

    Bryan Caplan is the opposite kind of moral realist, but he makes the sound point that (although deontologists do too) utilitarians ignore (some of) their moral sense.

  11. Humans are causal reasoners and causal agents, and our moral intuitions are grounded in things like:

    * Preventing other members of the group from causing harm.
    * Avoiding doing things that have the appearance to others of causing harm

    etc.

    By definition, your degree of agency in things that you visibly do (that is, cause to happen) is clear, while conversely there is always epistemic doubt, for yourself and critically for others, about anything you might have done but didn’t. First because we can only imagine, not see, the outcomes of acts not taken, and second, because the space of potential acts and outcomes is in reality (unlike the thought experiment) generally huge, so where we do choose to explore it our imaginations have to do a lot of work, and have every reason not to come into agreement with one another.

    In a world that more closely resembles our Environment of Evolutionary Adaptedness than the thought experiment, maybe your attempt to stop the train still kills the fat man but doesn’t save the others (e.g. they get squashed). And maybe the fat man’s grieving relatives suspect you of having a grudge that made you keen to kill him, separately from your “noble actions”.

    Whereas if you don’t act, the “moral doubt” runs in your favour. Maybe acting would have been bad, but also, perhaps you didn’t see the problem or deduce the solution in time. This speaks to your competence, but not to your motivations, and “morality” per se is about policing people’s inferred motivations (the goals they want to cause) first, and competence (their ability to cause them) second, for reasons that should be obvious if you think about it.

    So in a sense humans are instinctively a kind of naive utilitarian, for problems that are salient enough (“near mode”, perhaps), but calculating over goals as others infer them from the signalling implicit in our actions, while outcomes are secondary. This is the genuinely evolutionary source of our very strong commission/omission bias that Stephen Diamond mentions above, and why e.g. attempted murder is as a rule a more serious crime than negligent manslaughter in most cultures.

  12. People don’t think the trolley problem all the way through. The consequences go beyond the immediate deaths:

    Don’t push the fat man: People become more aware of train safety and take more care around trains. Laws might be passed to reduce the risk of future deaths.
    Push the fat man: Everybody’s quality of life is reduced as they have to constantly watch their back to avoid being murdered by utilitarians.

    Acts of commission *are* worse than acts of omission. Any ethical system that doesn’t acknowledge this destroys trust in society. It’s extremely rare for humans to sacrifice themselves for strangers. If you attempt to force it on people all you accomplish is increasing paranoia as everybody works to avoid being the victim. This causes great harm, both directly, and indirectly by reducing market efficiency etc.

    The steelman version says the murder is kept 100% secret, but in practice people will figure it out if people keep mysteriously dying in circumstances that serve the “greater good”.

    • One way of asking the trolley problem that I haven’t heard (not that my knowledge is comprehensive): which act would cause you more guilt? It may not be safe to say without empirical evidence, but it seems to me that most people would feel far more guilt for pushing the fat guy than for refraining. (A diehard moral realist would say this guilt is “irrational,” but, of course, all guilt is, in this same sense, “irrational”.)

      What we feel guilt about may express our moral sensibility (“intuitions”) most directly. Another deep expression is what we condemn. Who do you think will hate you more, the family of the fat guy or the sum of the family members whose kin you refrained from saving?

      If I look at the trolley problem as a personal challenge, I analyze it in terms of virtue ethics. To assume the right to make the decision to save the 5 is an act of supreme arrogance. (Our moral instincts are honed to condemn arrogance because of our evolution under egalitarian cultures.) I conclude I would let nature take its course because I don’t want to become the kind of person who is an officious, moralizing do-gooder. (A selfish motivation.)

      I’m curious whether Katja would sleep well after pushing the fat guy. (She would probably say this is irrelevant to ethical theory–but that’s to do the utilitarian equivocation dance on whether we’re talking about morality or psychology.)

      • I expect I wouldn’t sleep well, but I would sleep worse refraining. I’m not sure what you are talking about re utilitarian equivocation dances – I can think morality is a psychological matter without being obliged to tie my actions to particular anticipated biological responses.

        • You’re saying that you can conclude that humans quasi-universally find something immoral yet conclude that you ought to do it?

          Are you doing psychology or moral philosophy when you introspect your moral “intuitions”?

          • Not saying that – not sure what you would mean by ‘immoral’ then, if different from ‘ought’. But what keeps me awake at night is quite different from what I consider immoral. e.g. transgressing social norms might also disrupt my sleep, as would e.g. thinking about something interesting, or my pillow being uncomfortable.

            By ‘intuitions’ I mean fairly basic feelings about what is true – they don’t come with tags saying they are philosophy or psychology, so that is a further question that I don’t immediately know the answer to.

            • not sure what you would mean by ‘immoral’ then, if different from ‘ought’.

              I said people find it immoral, not that it necessarily is. If you’re a moral realist, you could consistently believe that our intuitions about moral reality are wrong. (You could resort to sacred books or putative reason.)

              But what keeps me awake at night is quite different from what I consider immoral.

              If you have insomnia due to guilt, it must be (this is an empirical claim) because you think (perhaps unconsciously) you did something immoral. You might restrict your moral intuitions to what seems true to your conscious mind, but I would reply that you are limiting your intuitions to your far-mode beliefs. If you’re doing psychology, this will seriously mislead you; if you’re doing philosophy, you need some justification for why far-mode thought is the window to moral reality.

  13. Well, if you can modify utilitarianism endlessly and still call it utilitarianism, then you can rescue utilitarianism, or at least “utilitarianism”.
