
Person moments make sense of anthropics

Often people think that various forms of anthropic reasoning require you to change your beliefs in ways other than conditionalizing on evidence. This is false, at least in the cases I know of. I shall talk about Frank Arntzenius's paper Some Problems for Conditionalization and Reflection [gated] because it explains the issue well, though I believe his current views agree with mine.

He presents five thought experiments: Two Roads to Shangri La, The Prisoner, John Collins's Prisoner, Sleeping Beauty and Duplication. In each of them, it seems the (arguably) correct answer violates van Fraassen's reflection principle, which basically says that if you expect to believe something in the future without having been e.g. hit over the head between now and then, you should believe it now. For instance the thirder position in Sleeping Beauty seems to violate this principle because before the experiment Beauty believes there is a fifty percent chance of heads, and that when she wakes up she will think there is a thirty-three percent chance. Arntzenius argued that these seemingly correct answers really are the correct ones, and claimed that they violate the reflection principle because credences can evolve in two ways other than by conditionalization.
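The thirder arithmetic can be checked with a quick simulation (my own illustrative sketch, not anything from the paper), assuming the standard protocol of one awakening on heads and two on tails, and counting over awakenings rather than coin flips:

```python
import random

random.seed(0)
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    # Standard protocol: Beauty is woken once on heads, twice on tails.
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_awakenings += 1

# Fraction of awakenings at which the coin landed heads.
print(heads_awakenings / total_awakenings)  # close to 1/3
```

Over coin flips Beauty's credence in heads is one half; counted over awakenings it is one third. That shift between the two counts is exactly the change the reflection principle appears to forbid.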

First he said credences can shift, for instance through time. I know that tomorrow I will have a higher credence in it being Monday than I do today, and yet it would not be rational for me to increase my credence in it being Monday now on this basis. They can also ‘spread out’. For instance if you know you are in Fairfax today, and that tomorrow a perfect replica of your brain experiencing Fairfax will be made and placed in a vat in Canberra, tomorrow your credence will go from being concentrated in Fairfax to being spread between there and Canberra. This is despite no damage having been done to your own brain. As Arntzenius pointed out, such an evolution of credence looks like quite the opposite of conditionalization, since conditionalization consists of striking out possibilities that your information excludes – it never opens up new possibilities.

I agree that beliefs should evolve in these two ways. However they are both really conditionalization, just obscured. They make sense as conditionalization when you think of them as carried out by different momentary agents, based on the information they infer from their connections to other momentary agents with certain beliefs (e.g. an immediately past self).

Normal cases can be considered this way quite easily. Knowing that you are the momentary agent that followed a few seconds after an agent who knew a certain set of facts about the objective world, and who is (you assume) completely trustworthy, means you can simply update the same prior with those same facts and come to the same conclusion. That is, you don’t really have to do anything. You can treat a stream of moments as a single agent. This is what we usually do.

However sometimes being connected in a certain way to another agent does not make everything that is true for them true for you. Most obviously, if they are a past self and know it is 12 o'clock, your connection via being their one-second-later self means you should exclude worlds where you are not at time 12:00:01. You have still learned from your known relationship to that agent and conditionalized, but you have not learned that what is true of them is true of you, because it isn't. This is the first way Arntzenius mentioned that credences seem to evolve through time not by conditionalization.

The second way occurs when one person-moment is at location X, and another person-moment has a certain connection to the person at X, but there is more than one possible connection of that sort. For instance when two later people both remember being an earlier person because the earlier person was replicated in some futuristic fashion. Then while the earlier person-moment could condition on their exact location, the later one must condition on being in one of several locations connected that way to the earlier person's location, so their credence spreads over more possibilities than that of the earlier self. If you call one of these later momentary agents the same person as the earlier one, and say they are conditionalizing, it seems they are doing it wrong. But considered as three different momentary people learning from their connections they are just conditionalizing as usual.

What exactly the later momentary people should believe is a matter of debate, but I think that can be framed entirely as a question of what their state spaces and priors look like.

Momentary humans almost always pass lots of information from one to the next, chronologically along chains of memory through non-duplicated people, knowing their approximate distance from one another. So most of the time they can treat themselves as single units who just have to update on any information coming from outside, as I explained. But conditionalization is not specific to these particular biological constructions; and when it is applied to information gained through other connections between agents, the resulting time series of beliefs within one human will end up looking different to that in a chain with no unusual extra connections.

This view also suggests that having cognitive defects, such as memory loss, should not excuse anyone from having credences, as for instance Arntzenius argued it should in his paper Reflections on Sleeping Beauty: “in the face of forced irrational changes in one’s degrees of belief one might do best simply to jettison them altogether”. There is nothing special about credences derived from beliefs of a past agent you identify with. They are just another source of information. If the connection to other momentary agents is different to usual, for instance through forced memory loss, update on it as usual.

How to talk to yourself

Scandinavian Airlines (SAS) airplane on Kiruna... (Image via Wikipedia)

Mental module 2: Eeek! Don’t make me go on that airplane! We will surely die! No no no!

Mental module 1: There is less than one in a million chance we die if we get on that airplane, based on actual statistics from airplanes that are, as far as you are concerned, identical.

Mental module 2: No!! It's a big metal box in the sky – that can't work. Panic! Panic!

Mental module 1: If we didn’t have an incredible pile of data from other big metal boxes in the sky your argument would have non-negligible bearing on the situation.

Mental module 2: But what if it crashes??

Mental module 1: Our lives would be much nicer if you paid attention to probabilities as well as how you feel about outcomes.

Mental module 2: It will shudder and tip over and we will not know how to update our priors on that, and we will be terrified, briefly, before we die!

Mental module 1: If its shuddering and tipping over were actually good evidence the plane was going to crash, there would presently be an incredibly small chance of them occurring, so you need not worry.

Mental module 2: We could crash into the rocks!!! Rocks! In our face! at terminal velocity! And bits of airplane! Do you remember that movie where an airplane crashed? There were bits of burning people everywhere. And what about those pictures you saw on the news? It’s going to be terrible. Even if we survive we will probably be badly injured and in the middle of a jungle, like that girl on that documentary. And what if we get deep vein thrombosis? We might struggle half way out of the jungle on one leg only to get a pulmonary embolism and suddenly die with no hope of medical help, which probably wouldn’t help anyway.

Mental module 1: (realizing something) But Me 2, we identify with being rational, like clever people we respect. Thinking the plane is going to crash is not rational.

Mental module 2: Yeah, rationality! I am so rational. Rationality is the greatest thing, and we care about it infinitely much! Who cares if the plane is really going to crash – I sure won’t believe it will, because that’s not rational!

Mental module 1: (struggling to overcome normal urges) Yes, now you understand.

Mental module 2: And even when it's falling from the sky I won't be scared, because that would not be rational! And when we smash into the ground, we will die for rationality! Behold my rationality!

Mental module 1: (to herself and onlookers from non-fictional universes) It may seem reasonable to reason with yourself, but after years of attempting it – just because that's what comes naturally – I think doing so relies on a false assumption: that other mental modules are like me somewhere deep down, and will eventually be moved by reasonable arguments, if only they get enough of them to overcome their inferior reasoning skills. Perhaps I have assumed this because I would like it to be true, or just because it is easiest to picture others as being like oneself.

In reality, the assumption is probably false. If part of your brain (or social network) doesn’t respond sensibly to information for the first week – or decade – of your acquaintance, you should be entertaining the possibility that they are completely insane. It is not obvious that well reasoned arguments are the best strategy for dealing with an insane creature, or for that matter with almost any object. Well reasoned arguments are probably not what you use with your ferret or your fire alarm.

Even if the mental module’s arguments are always only a bit flawed and can easily be corrected, resist the temptation to persist in correcting them if it isn’t working. An ongoing stream of slightly inaccurate arguments leading to the same conclusion is a sign that the arguments and the conclusion are causally connected in the wrong direction. In such cases, accuracy is futile.

Mental module 2 is a prime example, alas. She basically just expresses and reacts to emotions connected to whatever has her attention, and jumps to 'implications' through superficial associations. She doesn't really do inference, and probability is a foreign concept to her. The effective ways to cooperate with her then are to distract her with something prompting more convenient emotions, or to direct her attention toward different emotional responses connected to the present issue. Identifying with being rational is a useful trick because it provides a convenient alternative emotional imperative – to follow the directions of the more reasonable part of oneself – in any situation where the irrational mental module can picture a rationalist.

Mental module 2: Oh yes! I’m so rational I tricked myself into being rational!

Don’t warn nonspecifically!

Warning sign, Phillip Island, Victoria. This is a decent warning sign. (Image via Wikipedia)

I hate safety warnings. It’s not that I’m hurt by someone out there’s condescending belief that I can’t work out whether irons are for drying children. And I welcome the endless mental accretion of terrifying facts about obscure ways one can die. What really bothers me is that safety warnings often contain no information except ‘don’t do X’.

In a world covered in advice not to do X, and devoid of information about what will happen if you do X, except that it will be negative sometimes, it is hard and irritating to work out when it is appropriate to do X. Most things capable of being costly are a good idea some of the time. And if you were contemplating doing X, you probably have some reason. On top of that, as far as I can tell many of the warnings are about effects so weak that if you wanted to do X for some reason, that would almost certainly overwhelm the reason not to. But since all you are ever told is not to do X, you are never quite sure whether you are being warned off some trivial situation where a company hasn't actually tested whether their claims about their product still apply, or protected from a genuine risk.

My kettle came with a warning that if I ever boil it dry, I should replace it. Is this because it will become liable to explode? Because it might become discoloured? My sandwich meat came with a warning not to eat it after seven days. Presumably this is because they can’t guarantee a certain low level of risk after that, but since I don’t know what that level is, it’s not so useful to me. If I have a lot else to eat I will want a lower level of risk than if I’m facing the alternative of having to go shopping right now or of fainting from hunger. Medical warnings are very similar.

Perhaps it’s sensible to just ignore warnings when they conflict much with your preconceptions or are costly. In that case, how am I worse off than if there just weren’t warnings? How can I complain about people not giving me enough information? What obligation do they have to give me any?

There is the utilitarian argument that telling me would be much more beneficial than it is costly. But besides that, I think I am often worse off than if warning givers just shut up most of the time. Ignoring warnings is distracting and psychologically costly, even if you have decided that that's the best way to treat them. There is a definite drop in sandwich enjoyableness if its status as 'past its use-by date' lingers in your mind. It's hard to sleep after being told that you should rush to an emergency room.

I presume there are heaps of pointless warnings because they avoid legal trouble. But this doesn't explain why they all contain so little information. It is more effort to add information of course. But such a minuscule bit more: if you think people shouldn't do X, presumably you have a reason already, you just have to write it down. If you can't write it down, you probably shouldn't be warning. An addition of a few words to the standard label or sign can't be noticeably expensive. For more important risks, knowing the reason should encourage people to follow the advice, because they can distinguish them from unimportant risks. For unimportant risks, knowing the reason should make people more comfortable ignoring the advice, allowing them to enjoy the product or whatever, while leaving the warning writer safe from legal action. Win win! What am I missing?

If ‘birth’ is worth nothing, births are worth anything

It seems many people think creating a life has zero value. Some believe this because they think the average life contains about the same amount of suffering and satisfaction. Others have more conceptual objections, for instance to the notion that a person who does not exist now, and who will otherwise not exist, can be benefited. So they believe that there is no benefit to creating life, even if it’s likely to be a happy life. The argument I will pose is aimed at the latter group.

As far as I know, most people believe that conditional on someone existing in the future, it is possible to help them or harm them. For instance, suppose I were designing a toy for one year olds, and I knew it would take more than two years to go to market. Most people would not think the unborn state of its users-to-be should give me more moral freedom to cover it with poisonous paint or be negligent about its explosiveness.

If we accept this, then conditional on my choosing to have a child, I can benefit the child. For instance if I choose to have a child, I might then consider staying at home to play with the child. Assume the child will enjoy this. If the original world had zero value to the child, relative to the world where I don’t have the child (because we are assuming that being born is worth nothing), then this new world where the child is born and played with must have positive value to the child relative to the world where it is not born.

On the other hand suppose I had initially assumed that I would stay at home to play with any child I had, before I considered whether to have a child. Then according to the assumption that any birth is worth nothing, the world where I have the child and play with it is worth nothing more than the one where I don't have it. This is inconsistent with the previous evaluation unless you accept that the value of an outcome may depend on your steps in imagining it.
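The inconsistency can be put numerically (an illustrative sketch with made-up utilities, not part of the original argument): stipulating 'birth' to be worth zero assigns the same world different values depending on which circumstances are bundled into the default.

```python
# Assumed utility to the child of being played with (made up for illustration).
play_benefit = 10

# Framing 1: the default 'birth' is being born but not played with.
birth_value = 0  # stipulation: creating a life is worth nothing
world_born_and_played_with_1 = birth_value + play_benefit  # valued at 10

# Framing 2: the default 'birth' is being born-and-played-with.
birth_value = 0  # the same stipulation, applied to a richer default
world_born_and_played_with_2 = birth_value  # valued at 0

# The same world (child born and played with) is valued at 10 or 0,
# depending only on which circumstances were labelled 'birth'.
print(world_born_and_played_with_1, world_born_and_played_with_2)
```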

Any birth could be conceptually divided into a number of acts in this way: creating a person in some default circumstance, and improving or worsening the circumstances in any number of ways. If there is no reason to treat a particular set of circumstances as a default, any amount of value can be attributed to any birth situation by starting with a different default labelled 'birth' and setting it to zero value. If creating life under any circumstances is worth nothing, a specific birth can be given any arbitrary value. This seems harder to believe, and further from usual intuitions, than believing that creating life usually has a non-zero value.

You might think that I'm unfair to interpret 'creating life is worth nothing' as 'birth and anything that might come along with it is worth nothing', but this is exactly what is usually claimed: that creating a life is worth nothing, however happy you expect it to be. I am most willing to agree that some standard of birth is worth nothing, and all those births in happier circumstances are worth more, and those in worse circumstances worth negative values. This is my usual position, and the one that the people I am debating here object to.

If you believe creating a life is in general worth nothing, do you also believe that a specific birth can be worth any arbitrary amount?

Agreement on anthropics

Aumann’s agreement theorem says that Bayesians with common priors who know one another’s posteriors must agree. There’s no apparent reason this shouldn’t apply to posteriors arrived at using indexical information. This does not mean that you and I should both believe we are as likely to be the author of this blog, but that we should agree on the chances that I am.

The Self-Sampling Assumption (SSA) does not allow for this agreement between people with different reference classes, as I shall demonstrate. Consider the figure below. Suppose A people and B people both begin with an equal prior over the two worlds. Everyone knows their type (A or B), but other than that they do not know their location. For instance an A person may be in any of eight places, as far as they know. A people consider their reference class to be A people only. B people consider their reference class to be B people only. The people who are standing next to each other in the diagram meet and exchange their knowledge. For instance an A person meeting a B person will learn that the B person is a B person, and that they don’t know anything much else.

When A people meet B people, they both come to know what the other person’s posterior is. For instance an A person who meets a B person knows that the B person doesn’t know anything except that they are a B person who met an A person. From this the A person can work out the B person’s posterior over which world they are in.

Suppose everyone uses SSA. When an A person and a B person meet, the A people come to think they are four times as likely to be in world 1. This is because in world 2, only a quarter of A people meet a B person, whereas in world 1 they all do. The B people they meet cannot agree – in either world they expected to talk with an A person, and for that A person to be pretty sure they are in world 1. So despite knowing one another's posteriors and having common priors over which world exists, the A and B people who meet must disagree. Not only on one another's locations within the world, but over which world they are in*.
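The SSA numbers can be checked with explicit Bayes arithmetic (a sketch using the fractions stated above; under SSA, the likelihood of an A person's evidence "I met a B person" is the fraction of A people who meet a B person, since SSA treats you as a random sample from your reference class):

```python
from fractions import Fraction

prior = {"world 1": Fraction(1, 2), "world 2": Fraction(1, 2)}

# Fraction of A people who meet a B person: all in world 1, a quarter in world 2.
p_meet_B_given_A = {"world 1": Fraction(1), "world 2": Fraction(1, 4)}
# Every B person meets an A person in both worlds.
p_meet_A_given_B = {"world 1": Fraction(1), "world 2": Fraction(1)}

def posterior(likelihoods):
    unnorm = {w: prior[w] * likelihoods[w] for w in prior}
    total = sum(unnorm.values())
    return {w: unnorm[w] / total for w in unnorm}

print(posterior(p_meet_B_given_A))  # A assigns world 1 a 4/5 posterior
print(posterior(p_meet_A_given_B))  # B stays at 1/2 for each world
```

The A person ends up at 4:1 in favour of world 1 while the B person they are talking to stays at even odds, which is the disagreement described above.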

An example of this would be a husband and wife celebrating their wedding in a Chinese town with poor census data and an ongoing gender gap. The husband exclaims ‘wow, I am a husband! The disparity between gender populations in this town is probably smaller than I thought’. His wife expected in any case that she would end up with a husband who would make this inference from their marriage, and so cannot update and agree with him. Notice that neither partner need think the other has chosen the ‘wrong’ reference class in any way, it might be the reference class they would have chosen were they in that alternative indexical position.

In both of these cases the Self-Indication Assumption (SIA) allows for perfect agreement. Recall SIA weights the probability of worlds by the number of people in them who are in your situation. When A and B knowingly communicate, they are in symmetrical positions – either side of a communicating A and B pair. Both parties weight their hypotheses by the number of such pairs, and so they agree. Incidentally, when they first found out that they existed, and later when they learned their type, they did disagree. Communicating resolves this, instead of creating a disagreement as with SSA.
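The SIA update can be sketched by counting communicating pairs. The absolute pair counts below (eight in world 1, two in world 2) are my assumption, chosen to match the stated fractions given equal numbers of A people in each world:

```python
from fractions import Fraction

# Assumed number of communicating A-B pairs in each world: with eight A
# people per world, all are paired in world 1 and a quarter in world 2.
pairs = {"world 1": 8, "world 2": 2}

# SIA weights each world (given equal priors) by the number of observers
# in one's situation; for a member of a communicating pair, that is the
# number of such pairs.
total = sum(pairs.values())
sia_posterior = {w: Fraction(n, total) for w, n in pairs.items()}
print(sia_posterior)  # world 1 gets 4/5, world 2 gets 1/5
```

Both members of a pair weight worlds by the same pair counts, so each reaches the same 4:1 posterior in favour of world 1, and they agree.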

*If this does not seem bad enough, they each agree that the other person reasoned as well as they did.

Another implausible implication of this application of SSA is that you will come to agree with creatures that are more similar to you, even if you are certain that a given creature inside your reference class is identical to one outside your reference class in every aspect of its data collection and inference abilities.