Monthly Archives: March 2011

How to talk to yourself

Image: Scandinavian Airlines (SAS) airplane at Kiruna (via Wikipedia)

Mental module 2: Eeek! Don’t make me go on that airplane! We will surely die! No no no!

Mental module 1: There is less than one in a million chance we die if we get on that airplane, based on actual statistics from as far as you are concerned identical airplanes.

Mental module 2: No!! It’s a big metal box in the sky – that can’t work. Panic! Panic!

Mental module 1: If we didn’t have an incredible pile of data from other big metal boxes in the sky your argument would have non-negligible bearing on the situation.

Mental module 2: But what if it crashes??

Mental module 1: Our lives would be much nicer if you paid attention to probabilities as well as how you feel about outcomes.

Mental module 2: It will shudder and tip over and we will not know how to update our priors on that, and we will be terrified, briefly, before we die!

Mental module 1: If its shuddering and tipping over were actually good evidence that the plane was going to crash, there would presently be an incredibly small chance of their occurring, so you need not worry.

Mental module 2: We could crash into the rocks!!! Rocks! In our face! at terminal velocity! And bits of airplane! Do you remember that movie where an airplane crashed? There were bits of burning people everywhere. And what about those pictures you saw on the news? It’s going to be terrible. Even if we survive we will probably be badly injured and in the middle of a jungle, like that girl on that documentary. And what if we get deep vein thrombosis? We might struggle half way out of the jungle on one leg only to get a pulmonary embolism and suddenly die with no hope of medical help, which probably wouldn’t help anyway.

Mental module 1: (realizing something) But Me 2, we identify with being rational, like clever people we respect. Thinking the plane is going to crash is not rational.

Mental module 2: Yeah, rationality! I am so rational. Rationality is the greatest thing, and we care about it infinitely much! Who cares if the plane is really going to crash – I sure won’t believe it will, because that’s not rational!

Mental module 1: (struggling to overcome normal urges) Yes, now you understand.

Mental module 2: And even when it’s falling from the sky I won’t be scared, because that would not be rational! And when we smash into the ground, we will die for rationality! Behold my rationality!

Mental module 1: (to herself and onlookers from non-fictional universes) It may seem reasonable to reason with yourself, but after years of attempting it – just because that’s what comes naturally – I think doing so relies on a false assumption: that other mental modules are like me somewhere deep down, and will eventually be moved by reasonable arguments, if only they get enough of them to overcome their inferior reasoning skills. Perhaps I have assumed this because I would like it to be true, or just because it is easiest to picture others as being like oneself.

In reality, the assumption is probably false. If part of your brain (or social network) doesn’t respond sensibly to information for the first week – or decade – of your acquaintance, you should be entertaining the possibility that they are completely insane. It is not obvious that well reasoned arguments are the best strategy for dealing with an insane creature, or for that matter with almost any object. Well reasoned arguments are probably not what you use with your ferret or your fire alarm.

Even if the mental module’s arguments are always only a bit flawed and can easily be corrected, resist the temptation to persist in correcting them if it isn’t working. An ongoing stream of slightly inaccurate arguments leading to the same conclusion is a sign that the arguments and the conclusion are causally connected in the wrong direction. In such cases, accuracy is futile.

Mental module 2 is a prime example, alas. She basically just expresses and reacts to emotions connected to whatever has her attention, and jumps to ‘implications’ through superficial associations. She doesn’t really do inference and probability is a foreign concept. The effective ways to cooperate with her then are to distract her with something prompting more convenient emotions, or to direct her attention toward different emotional responses connected to the present issue. Identifying with being rational is a useful trick because it provides a convenient alternative emotional imperative – to follow the directions of the more reasonable part of oneself – in any situation where the irrational mental module can picture a rationalist.

Mental module 2: Oh yes! I’m so rational I tricked myself into being rational!

Don’t warn nonspecifically!

Image: warning sign, Phillip Island, Victoria. This is a decent warning sign (via Wikipedia)

I hate safety warnings. It’s not that I’m hurt by someone out there’s condescending belief that I can’t work out whether irons are for drying children. And I welcome the endless mental accretion of terrifying facts about obscure ways one can die. What really bothers me is that safety warnings often contain no information except ‘don’t do X’.

In a world covered in advice not to do X, and devoid of information about what will happen if you do X, except that it will sometimes be negative, it is hard and irritating to work out when it is appropriate to do X. Most things capable of being costly are a good idea some of the time. And if you were contemplating doing X, you probably have some reason. On top of that, as far as I can tell many of the warnings are about effects so weak that if you wanted to do X for some reason, that would almost certainly overwhelm the reason not to. But since all you are ever told is not to do X, you are never quite sure whether you are being warned off some trivial situation where a company hasn’t actually tested whether their claims about their product still apply, or protected from a genuine risk.

My kettle came with a warning that if I ever boil it dry, I should replace it. Is this because it will become liable to explode? Because it might become discoloured? My sandwich meat came with a warning not to eat it after seven days. Presumably this is because they can’t guarantee a certain low level of risk after that, but since I don’t know what that level is, it’s not so useful to me. If I have a lot else to eat I will want a lower level of risk than if I’m facing the alternative of having to go shopping right now or of fainting from hunger. Medical warnings are very similar.

Perhaps it’s sensible to just ignore warnings when they conflict much with your preconceptions or are costly. In that case, how am I worse off than if there just weren’t warnings? How can I complain about people not giving me enough information? What obligation do they have to give me any?

There is the utilitarian argument that telling me would be much more beneficial than it is costly. But besides that, I think I am often worse off than if warning givers just shut up most of the time. Ignoring warnings is distracting and psychologically costly, even if you have decided that that’s the best way to treat them. There is a definite drop in sandwich enjoyableness if its status as ‘past its use-by date’ lingers in your mind. It’s hard to sleep after being told that you should rush to an emergency room.

I presume there are heaps of pointless warnings because they avoid legal trouble. But this doesn’t explain why they all contain so little information. It is more effort to add information of course. But such a minuscule bit more: if you think people shouldn’t do X, presumably you have a reason already, you just have to write it down. If you can’t write it down, you probably shouldn’t be warning. An addition of a few words to the standard label or sign can’t be noticeably expensive. For more important risks, knowing the reason should encourage people to follow the advice more, because they can distinguish it from unimportant risks. For unimportant risks, knowing the reason should make people more willing to disregard the advice, allowing them to enjoy the product or whatever, while leaving the warning writer safe from legal action. Win-win! What am I missing?

If ‘birth’ is worth nothing, births are worth anything

It seems many people think creating a life has zero value. Some believe this because they think the average life contains about the same amount of suffering and satisfaction. Others have more conceptual objections, for instance to the notion that a person who does not exist now, and who will otherwise not exist, can be benefited. So they believe that there is no benefit to creating life, even if it’s likely to be a happy life. The argument I will pose is aimed at the latter group.

As far as I know, most people believe that conditional on someone existing in the future, it is possible to help them or harm them. For instance, suppose I were designing a toy for one year olds, and I knew it would take more than two years to go to market. Most people would not think the unborn state of its users-to-be should give me more moral freedom to cover it with poisonous paint or be negligent about its explosiveness.

If we accept this, then conditional on my choosing to have a child, I can benefit the child. For instance if I choose to have a child, I might then consider staying at home to play with the child. Assume the child will enjoy this. If the original world had zero value to the child, relative to the world where I don’t have the child (because we are assuming that being born is worth nothing), then this new world where the child is born and played with must have positive value to the child relative to the world where it is not born.

On the other hand suppose I had initially assumed that I would stay at home to play with any child I had, before I considered whether to have a child. Then according to the assumption that any birth is worth nothing, the world where I have the child and play with it is worth nothing more than the one where I don’t have it. This is inconsistent with the previous evaluation unless you accept that the value of an outcome may depend on your steps in imagining it.

Any birth could be conceptually divided into a number of acts in this way: creating a person in some default circumstance, and improving or worsening the circumstances in any number of ways. If there is no reason to treat a particular set of circumstances as a default, any amount of value can be attributed to any birth situation by starting with a different default labelled ‘birth’ and setting it to zero value. If creating life under any circumstances is worth nothing, a specific birth can be given any arbitrary value. This seems harder to believe, and further from usual intuitions, than believing that creating life usually has a non-zero value.
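With invented numbers, the inconsistency is a two-line calculation. The figure of 10 units for being played with is purely illustrative, not from the argument above:

```python
# Illustrative numbers only: suppose being played with is worth 10 units
# to the child, relative to not being played with.
PLAY_VALUE = 10

# Accounting A: treat a bare birth as the zero-valued default.
birth_plain = 0                             # 'creating a life is worth nothing'
birth_and_play = birth_plain + PLAY_VALUE   # benefit conditional on existing

# Accounting B: treat 'birth plus play' as the default labelled 'birth'
# and apply the same principle, setting that default to zero.
birth_and_play_B = 0
birth_plain_B = birth_and_play_B - PLAY_VALUE

# The very same outcome (a child who is born and played with) is valued
# differently depending only on which circumstance was labelled 'birth':
print(birth_and_play)    # 10 under accounting A
print(birth_and_play_B)  # 0 under accounting B
```

Since nothing privileges one default over the other, the zero-value principle lets the same birth take any value at all, which is the arbitrariness the paragraph above describes.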

You might think that I’m unfair to interpret ‘creating life is worth nothing’ as ‘birth and anything that might come along with it is worth nothing’, but this is exactly what is usually claimed. That creating a life is worth nothing, even if you expect it to be happy, however happy. I am most willing to agree that some standard of birth is worth nothing, and all those births in happier circumstances are worth more, and those in worse circumstances worth negative values. This is my usual position, and the one that the people I am debating here object to.

If you believe creating a life is in general worth nothing, do you also believe that a specific birth can be worth any arbitrary amount?

Agreement on anthropics

Aumann’s agreement theorem says that Bayesians with common priors who know one another’s posteriors must agree. There’s no apparent reason this shouldn’t apply to posteriors arrived at using indexical information. This does not mean that you and I should both believe we are as likely to be the author of this blog, but that we should agree on the chances that I am.

The Self-Sampling Assumption (SSA) does not allow for this agreement between people with different reference classes, as I shall demonstrate. Consider the figure below. Suppose A people and B people both begin with an equal prior over the two worlds. Everyone knows their type (A or B), but other than that they do not know their location. For instance an A person may be in any of eight places, as far as they know. A people consider their reference class to be A people only. B people consider their reference class to be B people only. The people who are standing next to each other in the diagram meet and exchange their knowledge. For instance an A person meeting a B person will learn that the B person is a B person, and that they don’t know anything much else.

When A people meet B people, they both come to know what the other person’s posterior is. For instance an A person who meets a B person knows that the B person doesn’t know anything except that they are a B person who met an A person. From this the A person can work out the B person’s posterior over which world they are in.

Suppose everyone uses SSA. When an A person and a B person meet, the A people come to think they are four times as likely to be in World 1. This is because in World 2, only a quarter of A people meet a B person, whereas in World 1 they all do. The B people they meet cannot agree – in either world they expected to talk with an A person, and for that A person to be pretty sure they are in World 1. So despite knowing one another’s posteriors and having common priors over which world exists, the A and B people who meet must disagree. Not only on one another’s locations within the world, but over which world they are in*.

An example of this would be a husband and wife celebrating their wedding in a Chinese town with poor census data and an ongoing gender gap. The husband exclaims ‘wow, I am a husband! The disparity between gender populations in this town is probably smaller than I thought’. His wife expected in any case that she would end up with a husband who would make this inference from their marriage, and so cannot update and agree with him. Notice that neither partner need think the other has chosen the ‘wrong’ reference class in any way; it might be the reference class they would have chosen were they in that alternative indexical position.

In both of these cases the Self-Indication Assumption (SIA) allows for perfect agreement. Recall SIA weights the probability of worlds by the number of people in them in your situation. When A and B knowingly communicate, they are in symmetrical positions – either side of a communicating A and B pair. Both parties weight their hypotheses by the number of such pairs, and so they agree. Incidentally, when they first found out that they existed, and later when they learned their type, they did disagree. Communicating resolves this, instead of creating a disagreement as with SSA.
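Since the figure is not reproduced above, the arithmetic can be sketched under an assumed structure consistent with the text: World 1 has four A people each paired with a B person, while World 2 has four A people and a single B person, so only a quarter of A people meet one. The populations and pairing counts below are illustrative assumptions, not taken from the original diagram:

```python
from fractions import Fraction

# Assumed world structure (the original figure is not reproduced here):
# World 1: 4 A people and 4 B people, all standing in A-B pairs.
# World 2: 4 A people and 1 B person, so only one A-B pair forms.
worlds = {
    "World 1": {"A": 4, "B": 4, "pairs": 4},
    "World 2": {"A": 4, "B": 1, "pairs": 1},
}
prior = Fraction(1, 2)  # common equal prior over the two worlds

def posterior(likelihoods):
    """Bayes' rule from equal priors and per-world likelihoods."""
    joint = {w: prior * likelihoods[w] for w in worlds}
    total = sum(joint.values())
    return {w: p / total for w, p in joint.items()}

# SSA for an A person who met a B: the chance that a randomly chosen
# member of their reference class (A people) meets a B, in each world.
ssa_A = posterior({w: Fraction(d["pairs"], d["A"]) for w, d in worlds.items()})

# SSA for a B person who met an A: every B person meets an A in both
# worlds, so the likelihoods are equal and no update is possible.
ssa_B = posterior({w: Fraction(d["pairs"], d["B"]) for w, d in worlds.items()})

# SIA: weight each world by the number of people in your exact situation,
# i.e. by the number of communicating A-B pairs. Both parties use the
# same count, so they agree.
sia = posterior({w: Fraction(d["pairs"]) for w, d in worlds.items()})

print(ssa_A)  # A favours World 1 four to one
print(ssa_B)  # B stays at even odds: disagreement under SSA
print(sia)    # both parties reach the same 4:1 posterior under SIA
```

Under these assumed counts the A person’s SSA posterior is 4/5 on World 1 while the B person’s stays at 1/2, reproducing the disagreement; under SIA both compute the same 4/5, reproducing the agreement.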

*If this does not seem bad enough, they each agree that the other person reasoned as well as they did.

Another implausible implication of this application of SSA is that you will come to agree with creatures that are more similar to you, even if you are certain that a given creature inside your reference class is identical to one outside your reference class in every aspect of its data collection and inference abilities.

Katla on death as entertainment

I’m rather busy this week, so here you have a guest post from my mildly irate, judgemental and intellectually careless friend Katla. NB. We are only friends because we grew up in the same town.

***

As a creature, I have a nicely developed fear of death. I don’t like thinking about death at all. Just the sight of a graveyard, or the ‘deaths’ section of the newspaper, or a living creature that will one day die often plunges my brain into jittery superstition. Like most people, I would probably risk my life to avoid thinking about the fact that my life is at risk. But all this careful aversion and ignorance is wasted when in the middle of my escapism in fiction I come face to face with the death of a fictional colleague. And a small helpless boy, and six friends. And my wife, and a country. And seven gazillion aliens.

For some reason people are dying all over the place in fiction. It’s as if nothing really matters enough in a story unless someone is dead over it. Why?

Most people are with me on the avoiding thinking about death front, in real life. We go to all this trouble to euphemise about it. We hire doctors to make and take responsibility for decisions relating to it. We avoid discovering whether we are at risk for it. We hate it when people we know die. We make up ridiculous stories about how nobody actually ever dies, but has just been taken to a new home. When death happens we cover it in a veil of official meaningfulness, and have a big ceremony, hoping to convince ourselves that it is a proper and meaningful symbolic event, not the disgusting and horrifying conversion of a person into a corpse. We much prefer to keep our minds on meaning and legacies than to remember there is a dead body lying around. We avoid actually planning this in advance though, because it doesn’t bear thinking about. And so on.

Yet scrolling through the channels it seems most movies have death as a plot element important enough to mention in the blurb. When a stoic government official in post-war Japan learns he has terminal cancer, he suddenly realizes he’s squandered his life on meaningless red tape…this stunning emotional drama recounts the events surrounding Joan of Arc’s 1431 heresy trial, burning at the stake and subsequent martyrdom…An easily spooked guy, Columbus joins forces with wild man Tallahassee to fight for survival in a world virtually taken over by freakish zombies…

The last book I read where people weren’t dying was Pride and Prejudice, which is kind of far into romance to have to go to avoid this phenomenon. If there are spare characters, they die. If there is a point to be made, it is made with someone’s death. If something is important, someone dies to flag it. Fair enough for war stories and action movies, but why should most stories be permeated with death?

Perhaps in some strange way we love death at the same time as fearing it? Like roller coasters, fear in a safe place might be enjoyable. We certainly pick up newspapers and magazines which boast the lowdown on horrific murders. Or perhaps we don’t especially love it, but are drawn to it in the same way that a herd of antelopes doesn’t love a lion’s roar, but nonetheless finds it engaging beyond anything the hell else they could possibly be thinking about. In the same way that it’s hard to be satisfied with romance as an understated implication after you get used to graphic sex, perhaps it is hard to be engaged by the danger of failing at some small quest after getting viciously murdered becomes commonplace.

For most, the answer must be the first – they just love hearing about death in controlled circumstances. Otherwise the fiction makers would probably clue in to general preferences and tend more toward avoiding death. Some people are more like the antelopes. They don’t hate it enough to just avoid going to the movies or to only read romance novels, but they are uncomfortable. You probably don’t care about them, because they are sissy wimps.

Perhaps in fifty years it will be impossible to give proper significance to anything on the screen unless it involves the ass-raping of small children. Do you hope to remain in the laughing majority then? Appreciating the deep significance of that boy’s assault, or the ironic reference to earlier atrocities, or just hooting at the huge number of rapes the hero conducted in a short time, and how dumb his victims looked? Actually there may even be people already who need a good hard rape scene to get their sexual kicks. Well I think your eagerness to see people’s lives ended is about as off-putting.