Temporal allies and spatial rivals

(This post co-authored by Robin Hanson and Katja Grace.)

In the Battlestar Galactica TV series, religious rituals often repeated the phrase, “All this has happened before, and all this will happen again.” It was apparently comforting to imagine being part of a grand cycle of time. It seems less comforting to say “Similar conflicts happen out there now in distant galaxies.” Why?

Consider two possible civilizations, stretched either across time or space:

  • Time: A mere hundred thousand people live sustainably for a billion generations before finally going extinct.
  • Space: A trillion people spread across a thousand planets live for only a hundred generations, then go extinct.

Even though both civilizations support the same total number of lives (10^14 in each case: 10^5 people times 10^9 generations, or 10^12 people times 10^2 generations), most observers probably find the time-stretched civilization more admirable and morally worthy. It is “sustainable,” and in “harmony” with its environment. The space-stretched civilization, in contrast, seems to be “aggressively” expanding, and risks being an obese “repugnant conclusion” scenario. Why?

Finally, consider that people who think they are smart are often jealous to hear a contemporary described as “very smart,” but are much happier to praise the genius of a Newton, Einstein, etc. We are far less jealous of richer descendants than of richer contemporaries. And there is far more sibling rivalry than rivalry with grandparents or grandkids. Why?

There seems an obvious evolutionary reason: we compete genetically with siblings and contemporaries far more than with grandparents or grandkids, so sibling rivalry makes much more evolutionary sense. Humans apparently evolved to see their distant descendants and ancestors as allies, while seeing their contemporaries more as competitors. So a time-stretched world seems chock-full of allies, while a space-stretched one seems instead full of potential rivals, making the first world feel far more comforting.

Having identified a common human instinct about what to admire, and a plausible evolutionary origin for it, we now face the hard question: do we embrace this instinct as revealing a deep moral truth, or do we reject it as a morally irrelevant accident of our origins? The two of us (Robin and Katja) are inclined more to reject it, but your mileage may vary.

(This is cross-posted at Overcoming Bias.)

Hidden philosophical progress

Bertrand Russell:

If you ask a mathematician, a mineralogist, a historian, or any other man of learning, what definite body of truths has been ascertained by his science, his answer will last as long as you are willing to listen. But if you put the same question to a philosopher, he will, if he is candid, have to confess that his study has not achieved positive results such as have been achieved by other sciences…this is partly accounted for by the fact that, as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be called philosophy, and becomes a separate science. The whole study of the heavens, which now belongs to astronomy, was once included in philosophy; Newton’s great work was called ‘the mathematical principles of natural philosophy’. Similarly, the study of the human mind, which was a part of philosophy, has now been separated from philosophy and has become the science of psychology. Thus, to a great extent, the uncertainty of philosophy is more apparent than real: those questions which are already capable of definite answers are placed in the sciences, while those only to which, at present, no definite answer can be given, remain to form the residue which is called philosophy.

I often hear this selection-effect explanation for the apparently small number of resolved problems that philosophy can boast. I don’t think it necessarily lessens the criticism of philosophy, however. It matters whether the methods that were successful at providing insights in what were to become fields like psychology and astronomy – the methods which brought definite answers within reach – are among the methods presently included in philosophy. If they were not, then the fact that the word ‘philosophy’ has come to apply to a smaller set of methods which haven’t been successful does not particularly suggest that such methods will become successful in that way*. If they were the same methods, then that is more promising.

I don’t know which of these is the case. I also don’t actually know how many resolved problems philosophy has. If you do, feel free to tell me. I start a PhD in philosophy in the Autumn, and haven’t officially studied it before, so I am curious about its merits.

*Note that collecting resolved problems is only one way philosophy might be valuable. Russell points out that philosophy has been productive at making us less certain about things we thought we knew, which is important information.

Is it repugnant?

Derek Parfit’s ‘Repugnant Conclusion’ is that for any world of extremely happy and fulfilled people, there is a better world which contains a much larger number of people whose lives are only just worth living. This is a hard-to-avoid consequence of ethical theories on which more of whatever makes life worth living is better. It’s more complicated than that, but population ethicists have had a hard time finding a theory that avoids the repugnant conclusion without implying other crazy-seeming things.
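
For concreteness, here is a minimal sketch (not from Parfit, and with invented numbers) of the arithmetic driving the conclusion, under the simple ‘total view’ on which a world’s value is just the sum of its inhabitants’ welfare:

```python
# A world's value under the simple 'total view': the sum of individual
# welfare levels. All population and welfare numbers here are invented
# purely for illustration.

def total_value(population: int, welfare_per_person: float) -> float:
    return population * welfare_per_person

blissful = total_value(10**9, 100.0)    # extremely happy world: 1e11
marginal = total_value(10**15, 0.001)   # lives barely worth living: 1e12

print(marginal > blissful)  # True: this marginal world comes out 'better'
```

Since the marginal world’s population can always be made larger, its total eventually overtakes that of any fixed blissful world – which is exactly what makes the conclusion so hard to avoid on the total view.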

Parfit originally pointed out that people whose lives are barely worth living could be living lives of constant very low value, or their lives could have huge highs and lows. He asked us to focus on the first. I’m curious whether normal intuitions differ if we focus on a different form of ‘barely worth living’.

Consider an enormous and very rich civilization. Its members appreciate every detail of their lives very sensitively, and their lives are dramatic. They each regularly experience soaring elation, deep contentment and overpowering sensory pleasure. They are keenly ambitious, and almost always achieve their dreams. Everyone is successful and appreciated, and they are all extremely pleased about that. But these people are also subject to deep depressions, and are easily overcome by fear, rage or jealousy. Sometimes they lie awake at night anguished about their insignificance in the universe and their impending deaths. If they don’t achieve what they hoped, they can become overwhelmed by guilt, insecurity, and hurt pride. They soon bounce back, but live in slight fear of those emotions. They also have excruciating migraine headaches when they work too hard. All up, the positives in each person’s action-packed life just outweigh the negatives.

Now suppose there is a choice between a small world of people who only appreciate the pleasures, and a much, much larger world like that described above. Perhaps it turns out, for instance, that the overly pleasured people cannot be made productive, so we can choose a short future with a large number of people enjoying idle bliss on our saved-up resources, or an indefinitely long future with a vastly larger number of productive people each enjoying small net positives. How crazy does it seem to prefer the latter, at some level of extreme size?

 

I give my interpretation of the results here.

When not to know?

Jeff at Cheap Talk reports on Andrew Caplin’s good point: making tests less informative can make people better off, because often they don’t want that much information, but may still want a bit.

This reminded me of a more basic question: what makes people want to avoid getting information?

That is, when would people prefer to believe P(X) = y than to face a probability y of coming to believe X and a probability 1−y of coming to believe not-X?

One such time is when merely thinking about the question would be disconcerting. For instance, you may prefer to keep whatever probability distribution you already have over the manner in which your parents make love, rather than consider the question further.

Another time is when more uncertainty is useful in itself. A big category of this is when it lets you avoid responsibility. As in, ‘I would love to help, but I’m afraid I have no idea how to wash a cat’, or ‘How unfortunate that I had absolutely no idea that my chocolate comes from slaves, or I would have gone to lots of effort to find ethical chocolate’. If you can signal your ignorance, you might also avoid threats this way.

I’m more interested in situations like the one where you could call the doctor to get the results of your test for venereal disease, but you’d just rather not. Knowing would mostly seem to help you do things you would want to do if you did have such a disease, and you are already thinking about the topic. It seems you actually prefer the uncertainty to the knowledge itself. The intuitive interpretation seems to be something like ‘you suspect that you do have such a disease, and knowing will make you unhappy, so you prefer not to find out’. But to the extent that you suspect you have the disease, why aren’t you already unhappy? This doesn’t explain why you would rather be somewhat unhappy for certain than face a chance of being unhappier combined with a chance of relief from your present unhappiness. And it doesn’t distinguish that sort of case from the more common cases where people like to have information.
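
One way to make this puzzle precise (our framing, not anything from the post above): suppose your unhappiness is some function of your credence that you have the disease. If that function is linear, learning the result cannot change your expected unhappiness, since your expected posterior credence equals your current credence. Only if certainty hurts disproportionately – a convex dread function – does ignorance come out ahead in expectation. A minimal sketch, with invented dread functions:

```python
# Expected dread before vs. after learning a test result, given current
# credence p that you have the disease. Both dread functions are invented
# for illustration.

p = 0.5  # current credence that you have the disease

def linear_dread(credence: float) -> float:
    return credence  # dread proportional to credence

def convex_dread(credence: float) -> float:
    return credence ** 2  # certainty hurts disproportionately

for dread in (linear_dread, convex_dread):
    now = dread(p)
    # After the test: with probability p you learn you are sick
    # (credence 1), with probability 1 - p that you are well (credence 0).
    after = p * dread(1.0) + (1 - p) * dread(0.0)
    print(dread.__name__, now, after)

# linear_dread: 0.5 vs 0.5  -> indifferent to finding out
# convex_dread: 0.25 vs 0.5 -> expected dread rises if you find out
```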

A few cases where people often seek ignorance:

  • academic test results which are expected to be bad
  • medical test results
  • especially genetic tendencies to disease
  • whether a partner is cheating
  • more?

Notice that these all involve emotionally charged situations – can you think of some that don’t?

Perhaps there aren’t really any cases where people much prefer believing in a y% chance of X over a y% chance of believing X, absent external influences such as other people expecting you to do something about your unethical chocolate habit.

Another theory based on external influences is this. Suppose you currently believe with 50% probability that you have disease X, and that does indeed fill you with 50% dread. However, because this isn’t common knowledge, you are generally treated as if the chance were much lower. You are still officially well. If you actually discover that you have the disease, you are expected to tell people, and your situation will become much more than twice as unpleasant socially. Perhaps, even apart from the direct social effects, having others around you treat you as officially well makes you feel more confident in your good health.

This makes more sense in the case of a cheating partner. If you actually find out that they are cheating, it is more likely to become public knowledge that you know, in which case you will be expected to react, and to be humiliated or hurt. This is much worse than being treated as one half of an officially working relationship, regardless of your personal doubts.

This theory seems to predict less preference for ignorance in the academic test case, because until the results come out, students don’t have so much of an assumed status. But it does predict that a person who is generally expected to do well on tests will be more averse to finding out than a person who usually does less well, if the two have the same expectations about how well they did. It also predicts that if you are already thought to be unwell, or failing school, or in a failing marriage, you will usually be particularly keen to get more information: it can only improve your official status, even if your private appraisal is already hopeful in proportion to the information you expect to receive.

I don’t have much idea whether this theory is right. What are other cases where people don’t want more information, all else equal? Does social perception play much of a part? What are other theories?

Signaling for a cause

Suppose you have come to agree with an outlandish seeming cause, and wish to promote it. Should you:

a) Join the cause with gusto, affiliating with its other members, wearing its T-shirts, working on its projects, speaking its lingo, taking up the culture and other causes of its followers.

b) Be as ordinary as you can in every way, apart from speaking and acting in favour of the cause in a modest fashion.

c) Don’t even mention that you support the cause. Engage its supporters in serious debate.

If you saw that a cause had another radical follower, another ordinary person with sympathies for it, or another skeptic who thought it worth engaging, which would make you most likely to look into its claims?

What do people usually do when they come to accept a radical cause?