The usual repugnant conclusion:
A world of people living very good lives is always less good than some much larger world of people whose lives are only just worth living.
My variant, in brief:
A world containing a number of people living very good lives is always less good than some much larger, longer lived world of people whose lives contain extremes of good and bad that overall add to life being only just worth living.
The usual repugnant conclusion is considered very counterintuitive, so most people disagree with it. Consequently, avoiding the repugnant conclusion is often taken as a strong constraint on what a reasonable population ethics could look like (e.g. see this list of ways to amend population ethics, or chapter 17 onwards of Reasons and Persons). I asked my readers how crazy they thought it was to accept my variant of the repugnant conclusion, relative to the craziness of accepting the usual one. Below are the results so far.
Most people’s intuitions about my variant were quite different from the usual intuition about the repugnant conclusion, with only 21% considering both conclusions about as crazy. Everyone else who made the comparison found my version much more palatable, with 57% of people claiming it was quite sensible or better. These are the reverse of the usual intuition.
This difference demonstrates that the usual intuition about the repugnant conclusion can’t simply be generalised to ‘large populations of low value lives shouldn’t add up to a lot of value’, which is what the repugnant conclusion is usually taken to suggest: the intuition does not hold in such situations in general. The usual aversion must be about something other than population size and the value in each life, something that we usually abstract away when talking about the repugnant conclusion.
What could it be? I changed several things in my variant, so here are some hypotheses:
Variance: This is the most obvious change. Perhaps our intuitions are not so sensitive to the overall quality of a life as to the heights of its best bits. It’s not the notion of a low average that’s depressing, it’s losing the hope of a high.
Time: I described my large civilization as lasting much longer than my short one, rather than being larger only in space. This could make a difference: as Robin and I noted recently, people feel more positively about populations spread across time than across space. I originally included this change because I thought my own ill feelings toward the repugnant conclusion seemed to be driven in part by the loss of hope for future development that a large non-thriving population brings to mind, though that should not be part of the thought experiment. So that’s another explanation for the time dimension mattering.
Respectability/Status: in my variant, the big world people look like respectable, deserving elites, whereas if you picture the repugnant conclusion scenario as a packed subsistence world, they do not. This could make a difference to how valuable their world seems. Most people seem to care much more about respectable, deserving elites than they do about the average person living a subsistence lifestyle. Enjoying First World wealth without sending a lot of it to poor countries almost requires being pretty unconcerned about people who live near subsistence. Could our aversion to the repugnant conclusion merely be a manifestation of that disregard?
Error: Fewer than 4% of those who looked at my post voted; perhaps they are strange for some reason. Perhaps most of my readers are in favour of accepting all versions of the repugnant conclusion, unlike other people.
Suppose my results really are representative of most people’s intuitions. Something other than the large population of lives barely worth living makes the repugnant conclusion scenario repugnant. Depending on what it is, we might find that intuition more or less worth overruling. For instance, if it is just a disrespect for lowly people, we might prefer to give it up. In the meantime, if the repugnant conclusion is repugnant for some unknown reason which is not that it contains a large number of people with mediocre wellbeing, I think we should refrain from taking it as such a strong constraint on ethics regarding populations and their wellbeing.
I just don’t think the repugnant conclusion follows from any sensible version of utilitarianism. The only form of utilitarianism that gets round the impossibility of interpersonal comparisons of utility is the one based on a veil of ignorance argument, from Harsanyi (usually misattributed to Rawls). Behind the veil I don’t know who I’m going to be born as, so I care about my expected utility, which, with maximum-entropy (i.e. uniform) priors is just the average utility of people in the population.
So we should care about maximising average, not total utility, and prefer small populations of happy people to big populations of miserable ones.
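The identity the commenter is relying on can be sketched numerically (a toy model; the utility numbers are made up):

```python
# Toy illustration of Harsanyi's veil-of-ignorance argument: behind the
# veil I may be born as any member of the population with equal
# probability, so my expected utility is exactly the population average.
utilities = [10.0, 4.0, 7.0, 1.0]  # hypothetical utilities of each person

uniform_prior = 1.0 / len(utilities)
expected_utility = sum(uniform_prior * u for u in utilities)
average_utility = sum(utilities) / len(utilities)

# With a uniform prior over identities, the two quantities coincide.
print(expected_utility)  # 5.5
print(expected_utility == average_utility)  # True
```

This is just the observation that summing p·u with p = 1/N is the same as dividing the sum of utilities by N; any non-uniform prior over identities would break the equivalence.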
By the central limit theorem, you’d have more success with average utility by having a large population.
Doesn’t follow with standard preferences, I think. You don’t care about the standard deviation of the sample average (the thing the CLT says changes as the population grows). All of your risk aversion is already captured by your utility of getting bundle x as person i in state s; everything is linear from there up. Even if you have some concern about fairness behind the veil (see http://dx.doi.org/10.1016/j.jet.2011.04.001 ), this doesn’t change, as it’s still just adding additional curvature to each element of the sum, not wrapping the whole sum.
Perhaps with ambiguity aversion you could get that effect coming into play, though. The Mukerji et al. form leads precisely to additional curvature wrapping the whole sum.
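The point about the CLT can be checked with a small simulation (illustrative only; the utility distribution is made up): the expected value of the realized average utility is the same at every population size, and only its spread shrinks, which a linear expected-utility maximizer behind the veil is indifferent to.

```python
# Simulate the realized average utility of many possible worlds, for a
# small and a large population. Each person's utility is an i.i.d. draw
# from a hypothetical distribution with mean 5 and s.d. 2.
import random
import statistics

random.seed(0)

def realized_averages(pop_size, trials=2000):
    # One trial = one possible world; record its average utility.
    return [statistics.mean(random.gauss(5.0, 2.0) for _ in range(pop_size))
            for _ in range(trials)]

small = realized_averages(10)
large = realized_averages(1000)

# The expected average is ~5.0 in both cases; only the standard
# deviation of the realized average shrinks with population size (CLT).
print(round(statistics.mean(small), 1), round(statistics.mean(large), 1))
print(statistics.stdev(small) > statistics.stdev(large))
```

So a larger population narrows the distribution of the average without raising its expectation, which is why risk aversion over the sample average has to be smuggled in separately (e.g. via ambiguity aversion, as the comment above suggests).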
If you count the unborn as zero-utility members of a fixed pool of possible people, then average utility maximization and total utility maximization are identical: the denominator of the average is constant, so adding a person with positive utility raises both.
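The claim above can be sketched with hypothetical numbers: once the unborn count as zeros in a fixed pool, the average is just the total divided by a constant, so the two criteria rank worlds identically.

```python
# Sketch of the fixed-pool argument. POOL and the world utilities are
# hypothetical numbers chosen for illustration.
POOL = 100  # fixed number of possible people, born or not

def total(utilities):
    return sum(utilities)

def average_over_pool(utilities):
    # Unborn possible people contribute zero utility to the sum,
    # but still appear in the denominator.
    return sum(utilities) / POOL

world_a = [5.0] * 10   # 10 people living very good lives
world_b = [0.5] * 60   # 60 people with lives barely worth living

# The two criteria differ only by the constant factor 1/POOL,
# so they always agree on the ranking.
print(total(world_a) > total(world_b))                          # True
print(average_over_pool(world_a) > average_over_pool(world_b))  # True
```

Note this equivalence depends entirely on the pool being fixed; if the denominator is instead the number of people actually born, the two criteria come apart, which is where the usual average-vs-total dispute lives.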
Average utility has much stronger problems than the repugnant conclusion because at its logical extreme we get the best world being 1 extremely happy person.
The first point I’d tend to treat as a reductio ad absurdum of the idea of including potential people (particularly as we think utility is unbounded below). Less flippantly, to make the Harsanyi idea work you really have to think that the uncertainty behind the veil is about the properties you will have, not about who you’ll be, which is hopelessly vague (if I had been conceived one second later, would I be a different person?). And existence is not a property. Less flippantly still, for people behind the veil to be able to make decisions about “possible people”, people behind the veil must have preferences about existence versus never having existed (which is not the same as having ceased to exist). Revealed preference can never tell us anything about this, so even if it makes sense to think about existence as a choice variable behind the veil, we are left with a welfare theory that tells us absolutely nothing about whether high or low populations are preferable. Maybe never having existed is the epitome of the Buddhists’ nirvana?
The second point is just a clash of intuitions, I guess. If there’s only one person alive, and that one living person is extremely happy, and I’m alive, then I’d say it should follow that I am extremely happy. Where I expect intuitions mislead is that it’s hard to disentangle the idea of only one person being alive from the idea of having killed off everyone else, something which would unambiguously lower average utility.
One complication about considering average utility comes up if you decide that the simplest way to make sure everyone is happy is to limit the population. If you make all conscious life on Earth die out (happily) and replace it with one really happy creature, then you’ve increased the average happiness level on Earth, which would be the moral thing to do. But if it turns out that there’s a planet of really unhappy people somewhere else in the Universe, then you’ve simultaneously LOWERED the average happiness level in the Universe, which is immoral. So if you limit the population, you can never know if you’ve acted morally, except within a certain radius.
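The complication above is easy to make concrete with hypothetical numbers: the same act raises the Earth-wide average while lowering the Universe-wide average.

```python
# Hypothetical populations, chosen only to illustrate the scope problem
# for average utilitarianism described above.
earth = [2.0] * 1000        # Earth: many people, modestly happy
elsewhere = [-5.0] * 5000   # an unknown planet of very unhappy people

def avg(xs):
    return sum(xs) / len(xs)

# Replace all conscious life on Earth with one really happy creature:
new_earth = [100.0]

print(avg(new_earth) > avg(earth))                          # True: Earth's average rises
print(avg(new_earth + elsewhere) < avg(earth + elsewhere))  # True: Universe's average falls
```

The sign of the act flips purely because removing Earth’s modestly happy people shrinks the share of the Universe-wide average they were propping up, so whether you acted morally depends on populations you may never observe.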
If you aggregate the average through space and time and beyond species, and consider the past 500 million years of wildlife to be net-negative, or at least worse than a human life barely worth living, both average and total utilitarianism will converge on adding as many human lives barely worth living as possible to the future. Adding just one very happy life does not significantly impact an average baseline of 500 million years of suffering in this model.
This reminds me of a point I was just making about Nietzsche, that he takes the rejection of the Repugnant Conclusion to its logical extreme: given some valuation of individuals, he values a civilization by the L^\infty norm of its people, not the L^1 norm.
That’s an extreme, but some degree of it may be operating in most people’s intuitions- and a variable life means that some people at least are experiencing highs. Here’s another variant that this principle (and my own intuition) say will sound more palatable than the original Repugnant Conclusion: instead of a small civilization of X people at a high standard of living, we have a large civilization with Y people at subsistence, supporting Z people (randomly selected at birth) at a high standard of living. Say that Y >> X >> Z.
(There’s still some discomfort on the level of fairness, but running it by random selection should mitigate that.)
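The Nietzschean valuation described above can be sketched with hypothetical numbers: valuing a civilization by its best life (the L^∞ norm) rather than the sum of its lives (the L^1 norm) rejects the repugnant trade that totals endorse.

```python
# Two valuations of a civilization, given utilities of its people.
# Population sizes and utilities are hypothetical.
def l1(utilities):
    # total value: sum of (absolute) individual values
    return sum(abs(u) for u in utilities)

def l_inf(utilities):
    # Nietzschean value: the single best (largest-magnitude) life
    return max(abs(u) for u in utilities)

small_thriving = [9.0] * 100     # small world of very good lives
large_mediocre = [0.5] * 10_000  # huge world, lives barely worth living

print(l1(large_mediocre) > l1(small_thriving))       # True: totals favor the big world
print(l_inf(large_mediocre) < l_inf(small_thriving)) # True: L^inf rejects it
```

On this valuation, the subsistence-plus-elites variant in the comment above scores as well as the small thriving world so long as its Z elites reach comparable heights, which is exactly the intuition the commenter is pointing at.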
I suspect that the difference between this scenario and the repugnant conclusion may be due to many people’s inability to view those high-variance lives as less valuable than the ones posited for the small high-value world. I certainly had trouble with it.
Leading lives of meaning, with dizzying highs and terrifying lows, striving in the service of important causes: it seems like the sort of life modern people would see as high status, such as that of great artists or, more generally, “movers and shakers”.
Perhaps the comparison could be reformulated as one between a small world of ecstatically happy people and a much, much larger world of people who are happy to the same degree, except that for much of the day they are taken into a holding cell and tortured, such that their average quality of life is just barely worth living.
Such a scenario would more tightly target variance without bringing in so many confounding elements such as leading lives of self actualization or ones that would make great stories.
You have to examine the assumptions.
Did the world of x people living very good lives evolve from a world of y people (where x is much less than y) whose lives were horrible by KILLING y-x people, OR is it a binary choice where one or the other exists independently, OR is it a choice where the y-x people gradually die off of natural causes and then x people are left living good lives?
I need to know the answer before I can make my choice, because my order of preference is: the larger world of hardship evolving gracefully into the smaller world of plenty, followed by the binary choice, followed by the get-there-by-killing choice.
It’s only the get-there-by-killing choice I find repugnant.