Tag Archives: Anthropics

Who observes being you, and do they cheat in SIA?

Warning: this post is somewhat technical – looking at this summary should help.

1,000,000 people are in a giant urn. Each person is labeled with a number (number 1 through number 1,000,000).
A coin will be flipped. If heads, Large World wins and 999,999 people will be randomly selected from the urn. If tails, Small World wins and 1 person will be drawn from the urn.
After the coin flip, and after the sample is selected, we are told that person #X was selected (where X is an integer between 1 and 1,000,000).
Prior probability of Large World: P(heads)=0.5
Posterior probability of Large World: P(heads|person #X selected)=P(heads)=0.5
Regardless of whether the coin landed heads or tails, we knew we would be told about some person being selected. So, the fact that we were told that someone was selected tells us nothing about which world we are in.

Jason Roy argues that the self indication assumption (SIA) is equivalent to such reasoning, and thus wrong. For the self indication assumption to be legitimate, it would have to be analogous to a selection procedure where you can only ever hear about, say, person number 693465 – if they don't come up, you hear nothing.
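To see the gap between these two procedures numerically, here is a minimal Bayes calculation (a sketch in Python; the number 693465 is just the example above):

```python
from fractions import Fraction

N = 1_000_000
p_heads = Fraction(1, 2)  # prior on Large World

# Procedure 1: after the draw, we are told about some person who was
# actually selected. Someone is selected either way, so a report of
# the form "person #X was selected" is guaranteed under both hypotheses.
p_report_given_heads = Fraction(1)
p_report_given_tails = Fraction(1)
posterior_1 = (p_heads * p_report_given_heads) / (
    p_heads * p_report_given_heads + (1 - p_heads) * p_report_given_tails)
print(posterior_1)  # 1/2 -- no update

# Procedure 2: person #693465 was fixed in advance; we hear anything
# only if that particular person is drawn.
p_x_given_heads = Fraction(N - 1, N)  # 999,999 of 1,000,000 drawn
p_x_given_tails = Fraction(1, N)      # 1 of 1,000,000 drawn
posterior_2 = (p_heads * p_x_given_heads) / (
    p_heads * p_x_given_heads + (1 - p_heads) * p_x_given_tails)
print(posterior_2)  # 999999/1000000 -- strong update toward Large World
```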

In both cases you can only ever hear about one person in some sense; the question is whether the person you could hear about was chosen before the experiment, or afterwards from among those who came up. The self indication assumption looks at first like a case of the latter: nothing that could be called you existed before the experiment to have dibs on a particular physical arrangement if it came up, and you certainly didn't start thinking about the self indication assumption until well after you were chosen. These things are not really important though.

Which selection procedure is analogous to using SIA seems to depend on what real life thing corresponds to ‘you’ in the thought experiment when ‘you’ are told about people being pulled out of the urn. If ‘you’ are a unique entity with exactly your physical characteristics, then if you didn’t exist, you wouldn’t have heard of someone else – someone else would have heard of someone else. Here SIA stands; my number was chosen before the experiment as far as I’m concerned, even if I wasn’t there to choose it.

On the other hand ‘you’ can be thought of as an abstract observer who has the same identity regardless of characteristics. Then if a person with different characteristics existed instead of the person with your current ones, it’s just you observing a different first-person experience. Then it looks like you are taking a sample from those who exist, as in the second case, so it seems SIA fails.

This isn’t a question of which of those things exists; both are coherent concepts that could refer to real things. Should they each then take part in their own style of selection procedure and reason accordingly – your physical self discovering with utmost shock that it exists, while the abstract observer looks on unperturbed? No: they are the same person with the same knowledge now, so they should come to the same conclusion.

Look more closely at the lot of the abstract observer. Which abstract observers get to exist if there are different numbers of people? If each can only be one person at once, then in a smaller world some observers who would have been around in the bigger world must miss out. That means finding that you are the person with number X should still make you update in favor of the big world, exactly as much as the entity defined by those physical characteristics should; abstract observers weren’t guaranteed to exist either.

What if the abstract observer experiencing the selection procedure is defined to encompass all observerhood? There is just one observer, who always exists, and either observes lots of creatures or few, but in a disjointed manner such that it never knows if it observes more than the present one at a given time. If it finds itself observing anyone now it isn’t surprised to exist, nor to see the particular arbitrary collection of characteristics it sees – it was bound to see one or another. Now can we write off SIA?

Here the creature is in a different situation to any of Roy’s original ones. It is going to be told about all the people who come up, not just one. It is also in the strange situation of forgetting all but one of them at a time. How should it reason in this new scenario? In urn terms, this is like pulling all of the balls out of whichever urn comes up, one by one, but having your memory destroyed after each draw. Since the particular characteristics don’t tell you anything here, this is basically a version of the sleeping beauty problem. Debate has continued on that for a decade, so I shan’t try to answer Roy by solving it now. SIA gives the popular ‘thirder’ position though, so looking at the selection procedure from this perspective does not undermine SIA further.
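For what it’s worth, the thirder answer is just SIA-style weighting by observer-moments; a minimal sketch, assuming one awakening on heads and two on tails:

```python
from fractions import Fraction

# Sleeping Beauty: heads -> 1 awakening, tails -> 2 awakenings.
prior = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
awakenings = {"heads": 1, "tails": 2}

# SIA-style update: weight each world by its number of observer-moments.
weights = {w: prior[w] * awakenings[w] for w in prior}
total = sum(weights.values())
posterior = {w: weights[w] / total for w in weights}
print(posterior)  # heads: 1/3, tails: 2/3 -- the 'thirder' position
```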

Whether you think of the selection procedure experienced by an exact set of physical characteristics, an abstract observer, or all observerhood as one, using SIA does not amount to being surprised after the fact by the unlikelihood of whatever number comes up.

Anthropic summary

I mean to write about anthropic reasoning more in future, so I offer you a quick introduction to a couple of anthropic reasoning principles. There’s also a link to it in ‘pages’ in the sidebar. I’ll update it later – there are arguments I haven’t written up yet, plus I’m in the middle of reading the literature, so I hope to come across more good ones there.

SIA on other minds

Another interesting implication, if the self indication assumption (SIA) is right, is that solipsism is much less likely to be correct than you previously thought, and relatedly, that the problem of other minds is less problematic.

Solipsists think they are unjustified in believing in a world external to their minds, as one only ever knows one’s own mind and there is no obvious reason the patterns in it should be driven by something else (curiously, holding such a position does not entirely dissuade people from trying to convince others of it). This can then be debated on grounds of whether a single mind imagining the world is more or less complex than a world causing such a mind to imagine a world.

The problem of other minds is that even if you believe in the outside world that you can see, you can’t see other minds. Most of the evidence for them is by analogy to yourself, which is only one ambiguous data point (should I infer that all humans are probably conscious? All things? All girls? All rooms at night time?).

SIA says many minds are more likely than one, given that you exist. Imagine you are wondering whether this is World 1, with a single mind among billions of zombies, or World 2, with billions of conscious minds. If you start off roughly uncertain, updating on your own conscious existence with SIA shifts the probability of World 2 to billions of times the probability of World 1.
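As a rough sketch of that update, standing in ‘billions’ with 10^9 minds and an even prior:

```python
# SIA update: World 1 has one conscious mind, World 2 has a billion.
prior_w1, prior_w2 = 0.5, 0.5
minds_w1, minds_w2 = 1, 10**9

# Weight each world by its number of conscious observers.
w1 = prior_w1 * minds_w1
w2 = prior_w2 * minds_w2
p_w2 = w2 / (w1 + w2)
print(p_w2)  # ~0.999999999 -- World 2 about a billion times likelier
```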

Similarly for solipsism. Other minds probably exist. From this you may conclude the world around them does too, or just that your vat isn’t the only one.

SIA doomsday: The filter is ahead

The great filter, as described by Robin Hanson:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?

I will argue that we are not far along at all. Even if the steps of the filter we have already passed look about as hard as those ahead of us, most of the filter is probably ahead. Our bright future is an illusion; we await filtering. This is the implication of applying the self indication assumption (SIA) to the great filter scenario, so before I explain the argument, let me briefly explain SIA.

SIA says that if you are wondering which world you are in, rather than just wondering which world exists, you should update on your own existence by weighting possible worlds as more likely the more observers they contain. For instance if you were born of an experiment where the flip of a fair coin determined whether one (tails) or two (heads) people were created, and all you know is the setup and that you exist, SIA says heads is twice as likely as tails. This is contentious; many people think that in such a situation you should consider heads and tails equally likely. A popular result of SIA is that it perfectly protects us from the doomsday argument. So now I’ll show you that we are doomed anyway with SIA.
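A minimal sketch of that coin example, weighting each world by its number of observers:

```python
from fractions import Fraction

prior = Fraction(1, 2)               # fair coin
observers = {"heads": 2, "tails": 1}  # people created in each case

# SIA: weight each world by prior times observer count.
w_heads = prior * observers["heads"]
w_tails = prior * observers["tails"]
p_heads = w_heads / (w_heads + w_tails)
print(p_heads)  # 2/3 -- heads twice as likely as tails under SIA
```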

Consider the diagrams below. The first is just an example with a single possible world, so you can see clearly what all the boxes mean in the second diagram, which compares worlds. In a possible world there are three planets and three stages of life. Each planet starts at the bottom and moves up, usually until it reaches the filter, where most planets become dead, signified by grey boxes. In the example diagram the filter is after our stage. The small number of planets and stages and the concentration of the filter into one step are for simplicity; in reality the filter needn’t be only one unlikely step, and there are many planets and many phases of existence between dead matter and galaxy-colonizing civilization. None of this matters to the argument.

[Diagram: key]

The second diagram shows three possible worlds, with the filter in a different place in each. In every case one planet reaches the last stage in this model – this is to signify a small chance of reaching the last step, because we don’t see anyone out there, but have no reason to think it impossible. In the diagram, we are in the middle stage – earthbound technological civilization, say. Assume the various places we think the filter could be are equally likely.

[Diagram: SIA doom]

This is how to reason about your location using SIA:

  1. The three worlds begin equally likely.
  2. Update on your own existence using SIA by multiplying the likelihood of each world by its population. Now the likelihood ratio of the worlds is 3:5:7.
  3. Update on knowing you are in the middle stage. New likelihood ratio: 1:1:3. Of course if we began with an accurate number of planets in each possible world, the 3 would be humongous, and we would be much more likely to be in the as-yet-unfiltered world.

Therefore we are much more likely to be in worlds where the filter is ahead than behind.
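These steps can be checked with a quick calculation; a sketch assuming the toy numbers from the diagrams (three planets, three stages, one planet always reaching the top):

```python
from fractions import Fraction

# Live planet-stage "boxes" per world, by filter location, plus how many
# of those boxes sit in the middle stage (where we are).
worlds = {
    "filter before stage 1":       {"population": 3, "middle_stage": 1},
    "filter between stages 1 & 2": {"population": 5, "middle_stage": 1},
    "filter between stages 2 & 3": {"population": 7, "middle_stage": 3},  # ahead of us
}

prior = Fraction(1, 3)  # step 1: the three worlds begin equally likely

# Steps 2 and 3 combined: weight by population, then condition on being
# in the middle stage. The population cancels:
#   prior * population * (middle_stage / population) = prior * middle_stage
weights = {name: prior * w["middle_stage"] for name, w in worlds.items()}
total = sum(weights.values())
for name, wt in weights.items():
    print(name, wt / total)
# 1/5, 1/5, 3/5 -- likelihood ratio 1:1:3, favoring the filter-ahead world
```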

----

Added: I wrote a thesis on this too.