When not to know?

Jeff at Cheap Talk reports on Andrew Caplin’s good point: making tests less informative can make people better off, because often they don’t want that much information, but may still want a bit.

This reminded me of a more basic question: what makes people want to avoid getting information?

That is, when would people prefer to believe P(X) = y% rather than to have a y% chance of believing X and a (1-y)% chance of believing not X?

One such time is when thinking about the question at all would be disconcerting. For instance, you may prefer to keep whatever probability distribution you already have over the manners in which your parents may make love, rather than consider the question.

Another time is when more uncertainty is useful in itself. A big category of this is when it lets you avoid responsibility. As in, ‘I would love to help, but I’m afraid I have no idea how to wash a cat’, or ‘How unfortunate that I had absolutely no idea that my chocolate comes from slaves, or I would have gone to lots of effort to find ethical chocolate’. If you can signal your ignorance, you might also avoid threats this way.

I’m more interested in situations like the one where you could call the doctor to get the results of your test for venereal disease, but you’d just rather not. Knowing would seem to mostly help you do things you would want to do in the case that you do have such a disease, and you are already thinking about the topic. It seems you actually prefer the uncertainty to the knowledge in themselves. The intuitive interpretation seems to be something like ‘you suspect that you do have such a disease, and knowing will make you unhappy, so you prefer not to find out’. But to the extent that you suspect you have the disease, why aren’t you already unhappy? So that doesn’t explain why you would rather definitely be somewhat unhappy than take a chance of being unhappier combined with a chance of relief from your present unhappiness. And it doesn’t distinguish that sort of case from the more common cases where people like to have information.
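
To see what the intuitive interpretation would have to assume, here is a minimal sketch (a toy model of mine, not anything established): suppose your unhappiness is some function u(p) of your subjective probability p that you have the disease. If u is linear, calling the doctor changes nothing in expectation; preferring ignorance requires something like a convex u, where dread only really bites near certainty. The dread functions below are made up for illustration.

```python
# Toy model (illustration only): unhappiness is a function u(p) of your
# subjective probability p of having the disease. If you call the doctor,
# with probability p you end up believing X (p -> 1), and with probability
# 1 - p you end up believing not-X (p -> 0).

def expected_unhappiness_if_you_learn(u, p):
    return p * u(1.0) + (1 - p) * u(0.0)

linear_dread = lambda p: p       # unhappiness proportional to suspicion
convex_dread = lambda p: p ** 3  # mild suspicion barely hurts; near-certainty hurts a lot

p = 0.5  # you currently think the result is a coin flip
for name, u in [("linear", linear_dread), ("convex", convex_dread)]:
    print(f"{name}: stay ignorant = {u(p):.3f}, "
          f"expected if you learn = {expected_unhappiness_if_you_learn(u, p):.3f}")

# linear: stay ignorant = 0.500, expected if you learn = 0.500
# convex: stay ignorant = 0.125, expected if you learn = 0.500
```

On the linear picture there is nothing to avoid, which is exactly the puzzle; only if suspicion hurts disproportionately little compared to certainty does staying ignorant come out ahead in expectation.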

A few cases where people often seek ignorance:

  • academic test results which are expected to be bad
  • medical test results
  • especially genetic tendencies to disease
  • whether a partner is cheating
  • more?

Notice that these all involve emotionally charged situations – can you think of some that don’t?

Perhaps there aren’t really any cases where people much prefer belief in a y% chance of X over a y% chance of believing X, without external influences such as from other people expecting you to do something about your unethical chocolate habit.

Another theory based on external influences is this. Suppose you currently believe with 50% probability that you have disease X, and that does indeed fill you with 50% dread. However, because it isn’t common knowledge, you are generally treated as if the chance were much lower. You are still officially well. If you actually discover that you have the disease, you are expected to tell people, and that will become much more than twice as unpleasant socially. Perhaps, even besides the direct social effects, having others around you treat you as officially well makes you feel more confident in your good health.
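
To make the asymmetry this theory relies on concrete, here is a toy calculation with made-up numbers (illustrative only; nothing above fixes them): private dread scales smoothly with your subjective probability, while the social cost arrives in a lump only once the bad news is official.

```python
# Made-up numbers, just to show the shape of the argument.
p = 0.5                # current subjective probability of having the disease
private_dread = 10     # disutility of being certain you have it (felt either way)
social_cost = 30       # extra disutility of being officially unwell (only once confirmed)

stay_ignorant = p * private_dread              # you already carry proportional dread,
                                               # but are still treated as officially well
learn = p * (private_dread + social_cost)      # with probability p the result makes it official

print(stay_ignorant, learn)  # 5.0 vs 20.0: learning is worse in expectation,
                             # even though the private component is identical
```

The gap between 5.0 and 20.0 comes entirely from the lumpy social term; the private component is the same either way.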

This theory makes more sense in the case of a partner cheating. If you actually find out that they are cheating, it is more likely to become public knowledge that you know, in which case you will be expected to react and to be humiliated or hurt. This is much worse than being treated as the official holder of a working relationship, regardless of your personal doubts.

This theory seems to predict less preference for ignorance in the academic test case, because until the test comes out students don’t have so much of an assumed status. But this theory predicts that a person who is generally expected to do well on tests will be more averse to finding out than a person who usually does less well, if they have the same expectation of how well they went. It also predicts that if you are already thought to be unwell, or failing school or in a failing marriage, you will usually be particularly keen to get more information. It can only improve your official status, even if your private appraisal is already hopeful in proportion to the information you expect to receive.

I don’t have much idea whether this theory is right. What are other cases where people don’t want more information, all else equal? Does social perception play much of a part? What are other theories?

13 responses to “When not to know?”

  1. Perhaps it’s a simple utilitarian argument: more information might not actually improve things (or might not improve people’s assessment of the situation), so the only thing that comes across is the negative emotion. I agree that ignorance probably “protects” people too.
    In academic testing it might be the assumption that they’ll do poorly — that happens to many people — or, if they believe themselves to be superior, why would they want to have to “prove” it and risk finding that they aren’t as superior as they thought they were?

    Just so you know, I think you have a typo in the second section: “prefer belief in a y% chance of X over a y% chance of believing X,”

  2. Point 1:

    There have been a dozen or so big ideas that really shook up my thinking. One of them was the concept of utility functions: happiness based on inputs of 2x might not be 2 * happiness of (x). In fact, it’s almost guaranteed not to be. Having two dogs makes me happy. Three dogs might make me a BIT happier. 200 dogs? Definitely not 100x happier.

    In my mind I’ve binned U(x) in the same space as the S-boxes in DES (the Data Encryption Standard) and the “curves” tool in GIMP. …so I’m reaching for that same old shopworn tool here.

    Knowing with certainty that I was going to die in the next year would drastically decrease my happiness – I would obsess about whether I was spending each day of my life well, etc. Knowing that I would die in 2013 would not be much better. Same for knowing that I was going to die in 2014.

    Even knowing that I was going to die at a ripe old age would provide relief…but not enough to counteract the sequence of summed negatives to the right of the big Sigma in the series.

    I’m handwaving to try to figure out exactly how to characterize the mapping function that converts knowledge-of-death-in-year(X) into happiness-Y, but the insight I’m looking for is eluding me.

    Oh well.

    Point 2:

    If I knew the date of my death, I’d feel compelled to do…well, I don’t know exactly what. But I do know that I’d regard each moment that didn’t have “Hallmark movie” meta tags on it as suboptimal. I should be spending time with family and friends, or seeing amazing sights, or something, right?

    And yet, in real life, I enjoy flipping through a woodworking magazine, or practicing my (poor) guitar skills to help make them better. Both trivial indulgences and investments would seem contraindicated.

  3. This is tangential to our discussion, because you assume that the new information will reduce uncertainty, but Teddy has some work on situations where new information actually increases uncertainty. He calls the phenomenon “dilation” and has some links to papers on his webpage: http://www.hss.cmu.edu/philosophy/faculty-seidenfeld.php
    Of course, if information is going to increase uncertainty, you might not want to learn it.

    As for when you might not want to know something – perhaps you want to learn at a particular time, or from a particular person, so as to produce the best outcome. e.g. I don’t want to hear the ending of a film before I enter the cinema. That could justify prolonging ignorance in many circumstances.

    • I took a look at the first dilation paper… I think it’s misleading to characterize the new information as “increasing uncertainty”. For instance, in the two coin flip example at the beginning of the paper, your estimate of the 3-tuple (p, H_1, H_2) without knowing the result of the first flip has one distribution; after knowing the first flip, your updated estimate should have all the probability concentrated in half the space. Yes, this update has no practical value re: estimating H_2, but you don’t actually know *less*.
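
      A schematic reconstruction of the kind of two-coin setup being discussed (from memory, not a quote from the paper): H1 is a fair coin flip, and H2 agrees with H1 with some unknown probability theta. In rough Python:

```python
# Rough reconstruction of the setup, not necessarily the paper's exact example.
# H1 is a fair coin; H2 agrees with H1 with unknown probability theta.

def p_h2_heads_before_seeing_h1(theta):
    # P(H2 = heads) = P(H1 = heads) * theta + P(H1 = tails) * (1 - theta)
    return 0.5 * theta + 0.5 * (1 - theta)   # = 0.5 for every theta

def p_h2_heads_after_seeing_h1_heads(theta):
    return theta                             # sweeps all of [0, 1] as theta varies

for theta in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(theta,
          p_h2_heads_before_seeing_h1(theta),
          p_h2_heads_after_seeing_h1_heads(theta))

# Before seeing H1, your probability for H2 is exactly 0.5 whatever theta is;
# after seeing H1, it is theta (or 1 - theta), which, if theta is unknown, could
# be anywhere in [0, 1]. That interval-widening is what gets called "dilation",
# though, as the comment above says, you have not learned less; you have learned
# which unknown now matters.
```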

  4. Greg Kavka once wrote a paper on these cases, early 90s; maybe it was called “Some Benefits of Uncertainty.”

  5. Katja, you mentioned the case of a *venereal* disease, which seems much easier to explain than the others. If you model yourself as a selfish utilitarian subject to deontological moral constraints (which seems like a decent approximation to the behavior of many people) that, for example, require you to inform partners if you believe with >= 95% confidence that you have disease X, then remaining in a state of 50/50 ignorance better satisfies your selfish preferences.

  6. The notions that:
    – Individuals have something approximating our (ideologically based) notion of conscious choice
    – Any sort of behavior has ramifications other than for the moment of action
    – Any animal brain has the ability to take in, let alone process, let alone have that processing lead to “choice”
    – Anything discussed in the post is anything other than epiphenomenal, post hoc, etc.

    …well, these ideas seem quaint. Sensory processing and behavior triggering occur in milliseconds. Frontal lobe processing later, consciousness much later, verbal processing even later.

    Frankly, conscious-verbal processing doesn’t seem to matter much to anything.

  7. Another kind of case where people should prefer not to believe true things is where they know (or strongly suspect) antecedently that the information will be misleading.

    Now, it’s initially hard to think of real-life cases where you know in advance that a lot of true information will be misleading, but not be able to use that information itself to avoid being misled. But here are a few ways this might happen, I think:

    When reading the warning on medicine, or getting medical test results, or reading an instruction manual, I don’t want to be bombarded with lots of information that means little to me, even when that information is all true: I risk not seeing the forest for the trees, I risk being misled about which bits of information I am being given are most important, and I risk being in a “a little knowledge is a dangerous thing” kind of case, in various ways – by risking believing I have all the relevant information when I don’t, for example. And even knowing that there are these risks in advance doesn’t allow me to deal ideally with the large info-dump. In such cases, I’d sometimes rather be told much less (by a simplified instruction manual, or a short version of a test report).

    These cases don’t need to be “high emotion” cases, and are not directly relevant to e.g. why someone who suspected they were HIV positive would avoid getting a test. But I take it that it’s relevant to the more general question of your post.

    I guess these cases illustrate another reason why I might prefer to be told less rather than more: when it is irritating or boring to sort through the information to get what I want from it. Presumably when people click through license agreements without reading them, this is part of what is going on, even if they are curious to some extent about what they’re agreeing to and what rights they are giving up by clicking the “agree” button.

  8. Your mention of people being wilfully ignorant about situations with likely negative emotional outcomes brings to mind a couple of articles about how the brain can (or does) treat emotional ‘pain’ similarly to physical pain.[1][2]
    People will take some pretty drastic measures to avoid (physically) painful situations, especially if they have experienced a similar situation before.
    Maybe this is just people preferring to endure a state of mild cognitive dissonance over a painful situation which could stay with them forever.
    [1]http://www.sciencedaily.com/releases/2011/03/110328151726.htm
    [2]http://news.bbc.co.uk/2/hi/7512107.stm

  9. Pingback: Garbage in, garbage out « Blunt Object

  10. One might wonder how this breakdown into categories maps onto Nick Bostrom’s information hazards typology.

  11. I think this phenomenon is related to the notion of an Ugh Field (http://lesswrong.com/lw/21b/ugh_fields/). When I do things like that it usually feels like ‘I know I should find out the results, but I can’t bring myself to look at them’ which is a lot like other Ugh Fields.

  12. Hedonic Treader

    “more?”

    Yes. Snape kills Dumbledore.

