
Mediocre masses are not what’s repugnant

The usual repugnant conclusion:

A world of people living very good lives is always less good than some much larger world of people whose lives are only just worth living.

My variant, in brief:

A world containing a number of people living very good lives is always less good than some much larger, longer lived world of people whose lives contain extremes of good and bad that overall add to life being only just worth living.

The usual repugnant conclusion is considered very counterintuitive, so most people disagree with it. Consequently, avoiding the repugnant conclusion is often taken as a strong constraint on what a reasonable population ethics could look like (e.g. see this list of ways to amend population ethics, or chapter 17 onwards of Reasons and Persons). I asked my readers how crazy they thought it was to accept my variant of the repugnant conclusion, relative to the craziness of accepting the usual one. Below are the results so far.

Results from repugnance poll

Most people’s intuitions about my variant were quite different from the usual intuition about the repugnant conclusion, with only 21% considering both conclusions about as crazy. Everyone else who made the comparison found my version much more palatable, with 57% of people claiming it was quite sensible or better. These are the reverse of the usual intuition.

This difference suggests that the usual intuition about the repugnant conclusion can’t simply be generalised to ‘large populations of low-value lives shouldn’t add up to a lot of value’, which is what the repugnant conclusion is usually taken to show: the intuition just does not hold in such situations in general. The usual aversion must be about something other than population size and the value in each life, something that we usually abstract away when talking about the repugnant conclusion.

What could it be? I changed several things in my variant, so here are some hypotheses:

Variance: This is the most obvious change. Perhaps our intuitions are sensitive not so much to the overall quality of a life as to the heights of its best bits. It’s not the notion of a low average that’s depressing; it’s losing the hope of a high.

Time: I described my large civilization as lasting much longer than my short one, rather than being larger only in space. This could make a difference: as Robin and I noted recently, people feel more positively about populations spread across time than across space. I originally included this change because my own ill feelings toward the repugnant conclusion seemed to be driven in part by the loss of hope for future development that a large non-thriving population brings to mind, though that should not be part of the thought experiment. So that’s another explanation for the time dimension mattering.

Respectability/Status: In my variant, the big-world people look like respectable, deserving elites, whereas if you picture the repugnant conclusion scenario as a packed subsistence world, they do not. This could make a difference to how valuable their world seems. Most people seem to care much more about respectable, deserving elites than they do about the average person living a subsistence lifestyle. Enjoying First World wealth without sending a lot of it to poor countries almost requires being pretty unconcerned about people who live near subsistence. Could our aversion to the repugnant conclusion merely be a manifestation of that disregard?

Error: Fewer than 4% of those who looked at my post voted; perhaps those who voted are strange for some reason. Perhaps most of my readers are in favour of accepting all versions of the repugnant conclusion, unlike other people.

Suppose my results really are representative of most people’s intuitions. Something other than the large population of lives barely worth living makes the repugnant conclusion scenario repugnant. Depending on what it is, we might find that intuition more or less worth overruling. For instance, if it is just a disrespect for lowly people, we might prefer to give it up. In the meantime, if the repugnant conclusion is repugnant for some unknown reason which is not that it contains a large number of people with mediocre wellbeing, I think we should refrain from taking it as such a strong constraint on ethics regarding populations and their wellbeing.

Temporal allies and spatial rivals

(This post co-authored by Robin Hanson and Katja Grace.)

In the Battlestar Galactica TV series, religious rituals often repeated the phrase, “All this has happened before, and all this will happen again.” It was apparently comforting to imagine being part of a grand cycle of time. It seems less comforting to say “Similar conflicts happen out there now in distant galaxies.” Why?

Consider two possible civilizations, stretched either across time or space:

  • Time: A mere hundred thousand people live sustainably for a billion generations before finally going extinct.
  • Space: A trillion people spread across a thousand planets live for only a hundred generations, then go extinct.

Even though both civilizations support the same total number of lives, most observers probably find the time-stretched civilization more admirable and morally worthy. It is “sustainable,” and in “harmony” with its environment. The space-stretched civilization, in contrast, seems “aggressively” expanding and risks being an obese “repugnant conclusion” scenario. Why?

Finally, consider that people who think they are smart are often jealous to hear a contemporary described as “very smart,” but are much happier to praise the genius of a Newton, Einstein, etc. We are far less jealous of richer descendants than of richer contemporaries. And there is far more sibling rivalry than rivalry with grandparents or grandkids. Why?

There seems an obvious evolutionary reason – sibling rivalry makes a lot more evolutionary sense. We compete genetically with siblings and contemporaries far more than with grandparents or grandkids. It seems that humans naturally evolved to see their distant descendants and ancestors as allies, while seeing their contemporaries more as competitors. So a time-stretched world seems chock-full of allies, while a space-stretched one seems instead full of potential rivals, making the first world seem far more comforting.

Having identified a common human instinct about what to admire, and a plausible evolutionary origin for it, we now face the hard question: do we embrace this instinct as revealing a deep moral truth, or do we reject it as a morally irrelevant accident of our origins? The two of us (Robin and Katja) are inclined more to reject it, but your mileage may vary.

(This is cross-posted at Overcoming Bias.)

Hidden philosophical progress

Bertrand Russell:

If you ask a mathematician, a mineralogist, a historian, or any other man of learning, what definite body of truths has been ascertained by his science, his answer will last as long as you are willing to listen. But if you put the same question to a philosopher, he will, if he is candid, have to confess that his study has not achieved positive results such as have been achieved by other sciences…this is partly accounted for by the fact that, as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be called philosophy, and becomes a separate science. The whole study of the heavens, which now belongs to astronomy, was once included in philosophy; Newton’s great work was called ‘the mathematical principles of natural philosophy’. Similarly, the study of the human mind, which was a part of philosophy, has now been separated from philosophy and has become the science of psychology. Thus, to a great extent, the uncertainty of philosophy is more apparent than real: those questions which are already capable of definite answers are placed in the sciences, while those only to which, at present, no definite answer can be given, remain to form the residue which is called philosophy.

I often hear this selection effect explanation for the apparently small number of resolved problems that philosophy can boast. I don’t think it necessarily lessens this criticism of philosophy however. It matters whether the methods that were successful at providing insights in what were to become fields like psychology and astronomy – those which brought definite answers within reach – were methods presently included in philosophy. If they were not, then the fact that the word ‘philosophy’ has come to apply to a smaller set of methods which haven’t been successful does not particularly suggest that such methods will become successful in that way*. If they were the same methods, then that is more promising.

I don’t know which of these is the case. I also don’t actually know how many resolved problems philosophy has. If you do, feel free to tell me. I start a PhD in philosophy in the Autumn, and haven’t officially studied it before, so I am curious about its merits.

*Note that collecting resolved problems is only one way philosophy might be valuable. Russell points out that philosophy has been productive at making us less certain about things we thought we knew, which is important information.

Is it repugnant?

Derek Parfit’s ‘Repugnant Conclusion’ is that for any world of extremely happy and fulfilled people, there is a better world which contains a much larger number of people whose lives are only just worth living. This is a hard-to-avoid consequence of ethical theories where more of whatever makes life worth living is better. It’s more complicated than that, but population ethicists have had a hard time finding a theory that avoids the repugnant conclusion without implying other crazy-seeming things.

Parfit originally pointed out that people whose lives are barely worth living could be living lives of constant very low value, or their lives could have huge highs and lows. He asked us to focus on the first. I’m curious whether normal intuitions differ if we focus on a different form of ‘barely worth living’.

Consider an enormous and very rich civilization. Its members appreciate every detail of their lives very sensitively, and their lives are dramatic. They each regularly experience soaring elation, deep contentment and overpowering sensory pleasure. They are keenly ambitious, and almost always achieve their dreams. Everyone is successful and appreciated, and they are all extremely pleased about that. But these people are also subject to deep depressions, and are easily overcome by fear, rage or jealousy. Sometimes they lie awake at night anguished about their insignificance in the universe and their impending deaths. If they don’t achieve what they hoped they can become overwhelmed by guilt, insecurity, and hurt pride. They soon bounce back, but live in slight fear of those emotions. They also have excruciating migraine headaches when they work too hard. All up, the positives in each person’s action-packed life just outweigh the negatives.

Now suppose there is a choice between a small world of people who only appreciate the pleasures, or a much, much larger world like that described above. Perhaps, for instance, it turns out that the purely pleasured people are unable to be made productive, so we can choose a short future with a large number of people enjoying idle bliss with our saved-up resources, or an indefinitely long future with a vastly larger number of productive people each enjoying small net positives. How crazy does it seem to prefer the latter at some level of extreme size?

 

I give my interpretation of the results here.

When not to know?

Jeff at Cheap Talk reports on Andrew Caplin’s good point: making tests less informative can make people better off, because often they don’t want that much information, but may still want a bit.

This reminded me of a more basic question: what makes people want to avoid getting information?

That is, when would people prefer to believe P(X)=y% than to have a y% chance of believing X, and a (1-y)% chance of believing not X?

One such time is when thinking about the question at all would be disconcerting. For instance you may prefer whatever probability distribution you already have over the manners in which your parents may make love, than to consider the question.

Another time is when more uncertainty is useful in itself. A big category of this is when it lets you avoid responsibility. As in, ‘I would love to help, but I’m afraid I have no idea how to wash a cat’, or ‘How unfortunate that I had absolutely no idea that my chocolate comes from slaves, or I would have gone to lots of effort to find ethical chocolate’. If you can signal your ignorance, you might also avoid threats this way.

I’m more interested in situations like the one where you could call the doctor to get the results of your test for venereal disease, but you’d just rather not. Knowing would seem to mostly help you do things you would want to do in the case that you do have such a disease, and you are already thinking about the topic. It seems you actually prefer the uncertainty to the knowledge in itself. The intuitive interpretation seems to be something like ‘you suspect that you do have such a disease, and knowing will make you unhappy, so you prefer not to find out’. But to the extent you suspect that you have the disease, why aren’t you already unhappy? So that doesn’t explain why you would rather be definitely somewhat unhappy than take a chance of being unhappier together with a chance of relief from your present unhappiness. And it doesn’t distinguish that sort of case from the more common cases where people like to have information.
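The ‘why aren’t you already unhappy?’ point can be made concrete with a toy expected-unhappiness calculation. This is only a sketch: the 50% probability and the two unhappiness functions are illustrative assumptions, not anything from the post. If unhappiness is linear in your subjective probability of having the disease, finding out leaves expected unhappiness exactly unchanged; ignorance only comes out ahead if certainty of bad news hurts disproportionately, i.e. if unhappiness is convex in that probability.

```python
# Toy model: does finding out a test result raise expected unhappiness?
# All numbers are illustrative assumptions.

p = 0.5  # current subjective probability of having the disease


def linear_unhappiness(prob):
    # Unhappiness proportional to how likely you think the disease is.
    return prob


def convex_unhappiness(prob):
    # Certainty of bad news hurts disproportionately more than suspicion.
    return prob ** 2


for u in (linear_unhappiness, convex_unhappiness):
    stay_ignorant = u(p)
    # Finding out: with probability p you learn you have it (prob -> 1),
    # otherwise you learn you don't (prob -> 0).
    find_out = p * u(1.0) + (1 - p) * u(0.0)
    print(u.__name__, stay_ignorant, find_out)
    # linear: 0.5 vs 0.5 (indifferent); convex: 0.25 vs 0.5 (prefer ignorance)
```

So on this sketch, a bare preference for ignorance needs something beyond suspicion-proportional unhappiness, which is what the social-expectations theories below try to supply.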

A few cases where people often seek ignorance:

  • academic test results which are expected to be bad
  • medical test results
  • especially genetic tendencies to disease
  • whether a partner is cheating
  • more?

Notice that these all involve emotionally charged situations – can you think of some that don’t?

Perhaps there aren’t really any cases where people much prefer belief in a y% chance of X over a y% chance of believing X, without external influences such as from other people expecting you to do something about your unethical chocolate habit.

Another theory based on external influences is this. Suppose you currently believe with 50% probability that you have disease X, and that does indeed fill you with 50% dread. However, because it isn’t common knowledge, you are generally treated as if the chance were much lower. You are still officially well. If you actually discover that you have the disease, you are expected to tell people, and that will become much more than twice as unpleasant socially. Perhaps, even besides the direct social effects, having others around you treat you as officially well makes you feel more confident in your good health.

This makes more sense in the case of a partner cheating. If you actually find out that they are cheating it is more likely to become public knowledge that you know, in which case you will be expected to react and to be humiliated or hurt. This is much worse than being treated as the official holder of a working relationship, regardless of your personal doubts.

This theory seems to predict less preference for ignorance in the academic test case, because until the test comes out students don’t have so much of an assumed status. But this theory predicts that a person who is generally expected to do well on tests will be more averse to finding out than a person who usually does less well, if they have the same expectation of how well they went. It also predicts that if you are already thought to be unwell, or failing school or in a failing marriage, you will usually be particularly keen to get more information. It can only improve your official status, even if your private appraisal is already hopeful in proportion to the information you expect to receive.

I have not much idea if this theory is right. What are other cases where people don’t want more information, all things equal? Does social perception play much part? What are other theories?