How much do pictures matter?

George Lakoff has argued that metaphors underlie much of our thought and reasoning:

The science is clear. Metaphorical thought is normal. That should be widely recognized. Every time you think of paying moral debts, or getting bogged down on a project, or losing time, or being at a crossroads in a relationship, you are unconsciously activating a conceptual metaphor circuit in your brain, reasoning using it, and quite possibly making decisions and living your life on the basis of your metaphors. And that’s just normal. There’s no way around it! Metaphorical reason serves us well in everyday life. But it can do harm if you are unaware of it.

A different bike path by Moominmolly

Images also seem to play a big part in most people’s thought. For instance, when I think ‘I should go home soon before it gets dark’ there are associated images of my hallway and a curve of the bike path in evening light. I wonder how much the choice of such images influences our behaviour. If the image were of my sofa instead of my hallway, would I be more motivated? If the word ‘dog’ brings to mind an image of a towering beast I saw once, am I less likely to consider purchasing a dog of any kind than if it brings to mind something rabbit sized? If ‘minimum wage’ brings to mind a black triangle of deadweight loss, am I less likely to support a minimum wage than if it brings to mind an image of better paid workers (assuming my understanding of economics and society is the same)? This seems like something people must have studied, but I can’t easily find it.

It seems likely to me that such images would make some difference. If it is so, perhaps I should not let the important ones be chosen so arbitrarily (as far as my conscious mind is concerned).

When not to know?

Jeff at Cheap Talk reports on Andrew Caplin’s good point: making tests less informative can make people better off, because often they don’t want that much information, but may still want a bit.

This reminded me of a more basic question: what makes people want to avoid getting information?

That is, when would people prefer to believe P(X)=y% rather than to have a y% chance of believing X, and a (100-y)% chance of believing not X?

One such time is when thinking about the question at all would be disconcerting. For instance, you may prefer to keep whatever probability distribution you already have over the manner in which your parents may make love, rather than consider the question.

Another time is when more uncertainty is useful in itself. A big category of this is when it lets you avoid responsibility. As in, ‘I would love to help, but I’m afraid I have no idea how to wash a cat’, or ‘How unfortunate that I had absolutely no idea that my chocolate comes from slaves, or I would have gone to lots of effort to find ethical chocolate’. If you can signal your ignorance, you might also avoid threats this way.

I’m more interested in situations like the one where you could call the doctor to get the results of your test for venereal disease, but you’d just rather not. Knowing would seem mostly to help you do things you would want to do in the case that you do have such a disease, and you are already thinking about the topic. It seems you actually prefer the uncertainty to the knowledge in itself. The intuitive interpretation seems to be something like ‘you suspect that you do have such a disease, and knowing will make you unhappy, so you prefer not to find out’. But to the extent you suspect that you have the disease, why aren’t you already unhappy? So that doesn’t explain why you would rather definitely be somewhat unhappy than take a chance of being unhappier, with a chance of relief from your present unhappiness. And it doesn’t distinguish that sort of case from the more common cases where people like to have information.
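To see the puzzle in miniature, here is a toy sketch (my own illustration, not a model from the post or from Caplin): suppose your unhappiness depends only on your current credence that you have the disease. If unhappiness is simply proportional to credence, calling the doctor leaves expected unhappiness exactly where it was, so preferring ignorance needs something extra – for instance, bad news at certainty hurting disproportionately more than the suspicion already does.

```python
# Toy sketch (my own illustration, not from the post): unhappiness as a function
# of your current credence that you have the disease.

def expected_unhappiness_if_you_call(p, unhappiness):
    # With probability p you learn the bad news (credence jumps to 1),
    # with probability 1 - p you learn the good news (credence drops to 0).
    return p * unhappiness(1.0) + (1 - p) * unhappiness(0.0)

p = 0.5  # you currently suspect the disease with 50% probability

linear = lambda c: c       # unhappiness proportional to credence
convex = lambda c: c ** 2  # certain bad news hurts disproportionately

for name, u in [("linear", linear), ("convex", convex)]:
    ignorance = u(p)                                  # stay uncertain
    calling = expected_unhappiness_if_you_call(p, u)  # find out
    print(f"{name}: ignorance = {ignorance:.2f}, calling = {calling:.2f}")

# linear: ignorance = 0.50, calling = 0.50 -> indifferent, as the argument above suggests
# convex: ignorance = 0.25, calling = 0.50 -> staying ignorant only wins if near-certain
#         bad news is disproportionately painful compared to mere suspicion
```

On this toy picture, preferring not to call requires either that kind of disproportionate reaction to bad news, or the external influences discussed below.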

A few cases where people often seek ignorance:

  • academic test results which are expected to be bad
  • medical test results
  • especially genetic tendencies to disease
  • whether a partner is cheating
  • more?

Notice that these all involve emotionally charged situations – can you think of some that don’t?

Perhaps there aren’t really any cases where people much prefer belief in a y% chance of X over a y% chance of believing X, without external influences such as from other people expecting you to do something about your unethical chocolate habit.

Another theory based on external influences is this. Suppose you currently believe with 50% probability that you have disease X, and that does indeed fill you with 50% dread. However because it isn’t common knowledge, you are generally treated as if the chance were much lower. You are still officially well. If you actually discover that you have the disease, you are expected to tell people, and that will become much more than twice as unpleasant socially. Perhaps even besides the direct social effects, having others around you treat you as officially well makes you feel more confident in your good health.

This makes more sense in the case of a partner cheating. If you actually find out that they are cheating it is more likely to become public knowledge that you know, in which case you will be expected to react and to be humiliated or hurt. This is much worse than being treated as the official holder of a working relationship, regardless of your personal doubts.

This theory seems to predict less preference for ignorance in the academic test case, because until the test comes out students don’t have so much of an assumed status. But this theory predicts that a person who is generally expected to do well on tests will be more averse to finding out than a person who usually does less well, if they have the same expectation of how well they went. It also predicts that if you are already thought to be unwell, or failing school or in a failing marriage, you will usually be particularly keen to get more information. It can only improve your official status, even if your private appraisal is already hopeful in proportion to the information you expect to receive.

I don’t have much idea whether this theory is right. What are other cases where people don’t want more information, all else equal? Does social perception play much of a part? What are other theories?

How to talk to yourself

Scandinavian Airlines (SAS) airplane on Kiruna... (image via Wikipedia)

Mental module 2: Eeek! Don’t make me go on that airplane! We will surely die! No no no!

Mental module 1: There is less than a one in a million chance we die if we get on that airplane, based on actual statistics from airplanes that are, as far as you are concerned, identical.

Mental module 2: No!! It’s a big metal box in the sky – that can’t work. Panic! Panic!

Mental module 1: If we didn’t have an incredible pile of data from other big metal boxes in the sky, your argument would have non-negligible bearing on the situation.

Mental module 2: But what if it crashes??

Mental module 1: Our lives would be much nicer if you paid attention to probabilities as well as how you feel about outcomes.

Mental module 2: It will shudder and tip over and we will not know how to update our priors on that, and we will be terrified, briefly, before we die!

Mental module 1: If its shuddering and tipping over were actually good evidence that the plane was going to crash, there would presently be an incredibly small chance of those things occurring, so you need not worry.

Mental module 2: We could crash into the rocks!!! Rocks! In our face! at terminal velocity! And bits of airplane! Do you remember that movie where an airplane crashed? There were bits of burning people everywhere. And what about those pictures you saw on the news? It’s going to be terrible. Even if we survive we will probably be badly injured and in the middle of a jungle, like that girl on that documentary. And what if we get deep vein thrombosis? We might struggle half way out of the jungle on one leg only to get a pulmonary embolism and suddenly die with no hope of medical help, which probably wouldn’t help anyway.

Mental module 1: (realizing something) But Me 2, we identify with being rational, like clever people we respect. Thinking the plane is going to crash is not rational.

Mental module 2: Yeah, rationality! I am so rational. Rationality is the greatest thing, and we care about it infinitely much! Who cares if the plane is really going to crash – I sure won’t believe it will, because that’s not rational!

Mental module 1: (struggling to overcome normal urges) Yes, now you understand.

Mental module 2: And even when it’s falling from the sky I won’t be scared, because that would not be rational! And when we smash into the ground, we will die for rationality! Behold my rationality!

Mental module 1: (to herself and onlookers from non-fictional universes) It may seem reasonable to reason with yourself, but after years of attempting it – just because that’s what comes naturally – I think doing so relies on a false assumption: that other mental modules are like me somewhere deep down, and will eventually be moved by reasonable arguments, if only they get enough of them to overcome their inferior reasoning skills. Perhaps I have assumed this because I would like it to be true, or just because it is easiest to picture others as being like oneself.

In reality, the assumption is probably false. If part of your brain (or social network) doesn’t respond sensibly to information for the first week – or decade – of your acquaintance, you should be entertaining the possibility that they are completely insane. It is not obvious that well reasoned arguments are the best strategy for dealing with an insane creature, or for that matter with almost any object. Well reasoned arguments are probably not what you use with your ferret or your fire alarm.

Even if the mental module’s arguments are always only a bit flawed and can easily be corrected, resist the temptation to persist in correcting them if it isn’t working. An ongoing stream of slightly inaccurate arguments leading to the same conclusion is a sign that the arguments and the conclusion are causally connected in the wrong direction. In such cases, accuracy is futile.

Mental module 2 is a prime example, alas. She basically just expresses and reacts to emotions connected to whatever has her attention, and jumps to ‘implications’ through superficial associations. She doesn’t really do inference, and probability is a foreign concept. The effective ways to cooperate with her, then, are to distract her with something prompting more convenient emotions, or to direct her attention toward different emotional responses connected to the present issue. Identifying with being rational is a useful trick because it provides a convenient alternative emotional imperative – to follow the directions of the more reasonable part of oneself – in any situation where the irrational mental module can picture a rationalist.

Mental module 2: Oh yes! I’m so rational I tricked myself into being rational!

Katla on death as entertainment

I’m rather busy this week, so here you have a guest post from my mildly irate, judgemental and intellectually careless friend Katla. NB. We are only friends because we grew up in the same town.

***

As a creature, I have a nicely developed fear of death. I don’t like thinking about death at all. Just the sight of a graveyard, or the ‘deaths’ section of the newspaper, or a living creature that will one day die often plunges my brain into jittery superstition. Like most people, I would probably risk my life to avoid thinking about the fact that my life is at risk. But all this careful aversion and ignorance is wasted when in the middle of my escapism in fiction I come face to face with the death of a fictional colleague. And a small helpless boy, and six friends. And my wife, and a country. And seven gazillion aliens.

For some reason people are dying all over the place in fiction. It’s as if nothing really matters enough in a story unless someone is dead over it. Why?

Most people are with me on the avoiding thinking about death front, in real life. We go to all this trouble to euphemise about it. We hire doctors to make and take responsibility for decisions relating to it. We avoid discovering whether we are at risk for it. We hate it when people we know die. We make up ridiculous stories about how nobody actually ever dies, but has just been taken to a new home. When death happens we cover it in a veil of official meaningfulness, and have a big ceremony, hoping to convince ourselves that it is a proper and meaningful symbolic event, not the disgusting and horrifying conversion of a person into a corpse. We much prefer to keep our minds on meaning and legacies rather than to remember there is a dead body lying around. We avoid actually planning this in advance though, because it doesn’t bear thinking about. And so on.

Yet scrolling through the channels it seems most movies have death as a plot element important enough to mention in the blurb. When a stoic government official in post-war Japan learns he has terminal cancer, he suddenly realizes he’s squandered his life on meaningless red tape…this stunning emotional drama recounts the events surrounding Joan of Arc’s 1431 heresy trial, burning at the stake and subsequent martyrdom…An easily spooked guy, Columbus joins forces with wild man Tallahassee to fight for survival in a world virtually taken over by freakish zombies…

The last book I read where people weren’t dying was Pride and Prejudice, which is kind of far into romance to have to go to avoid this phenomenon. If there are spare characters, they die. If there is a point to be made, it is made with someone’s death. If something is important, someone dies to flag it. Fair enough for war stories and action movies, but why should most stories be permeated with death?

Perhaps in some strange way we love death at the same time as fearing it? Like roller coasters, fear in a safe place might be enjoyable. We certainly pick up newspapers and magazines which boast the lowdown on horrific murders. Or perhaps we don’t especially love it, but are drawn to it in the same way that a herd of antelopes doesn’t love a lion’s roar, but nonetheless finds it engaging beyond anything the hell else they could possibly be thinking about. In the same way that it’s hard to be satisfied with romance as an understated implication after you get used to graphic sex, perhaps it is hard to be engaged by the danger of failing at some small quest after getting viciously murdered becomes commonplace.

For most, the answer must be the first – they just love hearing about death in controlled circumstances. Otherwise the fiction makers would probably clue in to general preferences and tend more toward avoiding death. Some people are more like the antelopes. They don’t hate it enough to just avoid going to the movies or to only read romance novels, but they are uncomfortable. You probably don’t care about them, because they are sissy wimps.

Perhaps in fifty years it will be impossible to give proper significance to anything on the screen unless it involves the ass-raping of small children. Do you hope to remain in the laughing majority then? Appreciating the deep significance of that boy’s assault, or the ironic reference to earlier atrocities, or just hooting at the huge number of rapes the hero conducted in a short time, and how dumb his victims looked? Actually there may even be people already who need a good hard rape scene to get their sexual kicks. Well I think your eagerness to see people’s lives ended is about as offputting.

Why are promisers innocent?

It is generally considered unethical to break promises. It is not considered unethical to make promises you would have been better off not to make. Yet when a promise is made and then broken, there is little reason in the abstract to suppose that either the past promiser or the present promise breaker made a better choice about what the future person should do.

Wedding Photography (image from icaromoreno)

For instance suppose a married woman has an affair. Much moral criticism is usually directed at her for having the affair, yet almost none is directed at her earlier self for marrying her husband in the first place.

It’s not that the later woman, who broke the promise, caused more harm than the earlier woman. Both of their acts were needed together to cause the broken promise. The later woman would have been acting just fine if the earlier woman hadn’t done what she did.

I think we direct all criticism to the later woman who breaks the promise because it is very useful to be seen as someone who thinks it’s important to keep promises. It is of little use to be seen as the sort of person who doesn’t make stupid promises, except insofar as it suggests we are more likely to keep promises.

This seems to me a clear case of morality being self-serving. It serves others too in this case, as usual, but the particular form of it is chosen to help its owner. Which is not particularly surprising if you think morality is a bunch of useful behaviours that evolved like all our other self-serving bits and pieces. However, if you think it is more like maths – something which is actually out there, and which we have somehow evolved to be able to intuitively appreciate – it is more surprising that it is self-serving like this.