Warning: this post is somewhat technical – looking at this summary should help.
1,000,000 people are in a giant urn. Each person is labeled with a number (number 1 through number 1,000,000).
A coin will be flipped. If heads, Large World wins and 999,999 people will be randomly selected from the urn. If tails, Small World wins and 1 person will be drawn from the urn.
After the coin flip, and after the sample is selected, we are told that person #X was selected (where X is an integer between 1 and 1,000,000).
Prior probability of Large World: P(heads)=0.5
Posterior probability of Large World: P(heads|person #X selected)=P(heads)=0.5
Regardless of whether the coin landed heads or tails, we knew we would be told about some person being selected. So, the fact that we were told that someone was selected tells us nothing about which world we are in.
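To make this concrete, here is a minimal Monte Carlo sketch of that procedure. As an assumption of the demo it is scaled down to a ten-person urn (nine drawn on heads, one on tails) so that any particular number comes up often enough to condition on; the choice of person #7 is arbitrary.

```python
# First selection procedure: we are always told the number of one uniformly
# chosen member of whatever sample came up.
# Scaled-down assumption: 10 people in the urn, 9 drawn on heads, 1 on tails.
import random

N, TRIALS = 10, 500_000
X = 7  # the number we happen to be told about (arbitrary)

reports_of_X = heads_and_X = 0
for _ in range(TRIALS):
    heads = random.random() < 0.5
    size = N - 1 if heads else 1                   # Large World vs Small World
    sample = random.sample(range(1, N + 1), size)  # who gets drawn from the urn
    reported = random.choice(sample)               # told about whoever came up
    if reported == X:
        reports_of_X += 1
        heads_and_X += heads

print(heads_and_X / reports_of_X)  # ~0.5: hearing "#7 was selected" changes nothing
```

Being told about #7 is equally likely either way – P(report #7 | heads) = (9/10)(1/9) = 1/10 and P(report #7 | tails) = (1/10)(1) = 1/10 – so the posterior equals the prior.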
Jason Roy argues that the self indication assumption (SIA) is equivalent to such reasoning, and thus wrong. For the self indication assumption to be legitimate, it would have to be analogous to a selection procedure where you can only ever hear about, say, person number 693,465 – if they don’t come up, you hear nothing.
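Changing one line of the sketch above gives Roy’s alternative procedure – a pre-fixed number (7 here stands in for 693,465) that you either hear about or you hear nothing – and now a report is strong evidence for Large World:

```python
# Roy's alternative procedure: the number you can hear about is fixed before
# the experiment; if it isn't in the sample, you hear nothing at all.
import random

N, TRIALS = 10, 500_000
X = 7  # fixed in advance

hears = heads_given_hear = 0
for _ in range(TRIALS):
    heads = random.random() < 0.5
    size = N - 1 if heads else 1
    sample = random.sample(range(1, N + 1), size)
    if X in sample:             # otherwise: silence, and nothing to update on
        hears += 1
        heads_given_hear += heads

print(heads_given_hear / hears)  # ~0.9, since P(#7 comes up) is 9/10 vs 1/10
```

With the full 1,000,000-person urn the same calculation gives a posterior of 999,999/1,000,000 for Large World.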
In both cases you can only hear about one person in some sense; the question is whether the person you could hear about was chosen before the experiment, or afterwards from those who came up. The self indication assumption looks at first like a case of the latter: nothing that can be called you existed before the experiment to have dibs on a particular physical arrangement if it came up, and you certainly didn’t start thinking about the self indication assumption until well after you were chosen. These things are not really important though.
Which selection procedure is analogous to using SIA seems to depend on what real life thing corresponds to ‘you’ in the thought experiment when ‘you’ are told about people being pulled out of the urn. If ‘you’ are a unique entity with exactly your physical characteristics, then if you didn’t exist, you wouldn’t have heard of someone else – someone else would have heard of someone else. Here SIA stands; my number was chosen before the experiment as far as I’m concerned, even if I wasn’t there to choose it.
On the other hand ‘you’ can be thought of as an abstract observer who has the same identity regardless of characteristics. Then if a person with different characteristics existed instead of the person with your current ones, it’s just you observing a different first-person experience. Then it looks like you are taking a sample from those who exist, as in the second case, so it seems SIA fails.
This isn’t a question of which of those things exists. They are both coherent enough concepts that could refer to real things. Should they both be participating in their own style of selection procedure then, and reasoning accordingly? Your physical self discovering with utmost shock that it exists, while the abstract observer looks on nonplussed? No – they are the same person with the same knowledge now, so they should really come to the same conclusion.
Look more closely at the lot of the abstract observer. Which abstract observers get to exist if there are different numbers of people? If they can only be one person at once, then in a smaller world some observers who would have been around in the bigger world must miss out. Which means finding that you have the person with any number X should still make you update in favor of the big world, exactly as much as the entity defined by those physical characteristics should; abstract observers weren’t guaranteed to exist either.
What if the abstract observer experiencing the selection procedure is defined to encompass all observerhood? There is just one observer, who always exists, and either observes lots of creatures or few, but in a disjointed manner such that it never knows if it observes more than the present one at a given time. If it finds itself observing anyone now it isn’t surprised to exist, nor to see the particular arbitrary collection of characteristics it sees – it was bound to see one or another. Now can we write off SIA?
Here the creature is in a different situation to any of Roy’s original ones. It is going to be told about all the people who come up, not just one. It is also in the strange situation of forgetting all but one of them at a time. How should it reason in this new scenario? In balls-and-urns terms, this is like pulling all of the balls out of whichever urn comes up, one by one, but having your memories destroyed after each draw. Since the particular characteristics don’t tell you anything here, this is basically a version of the sleeping beauty problem. Debate has continued on that for a decade, so I shan’t try to answer Roy by solving it now. SIA gives the popular ‘thirder’ position though, so looking at the selection procedure from this perspective does not undermine SIA further.
Whether you think of the selection procedure experienced by an exact set of physical characteristics, an abstract observer, or all observerhood as one, using SIA does not amount to being surprised after the fact by the unlikelihood of whatever number comes up.
Thanks for all the interesting posts. I’ve been intrigued by the SIA and I think the Beauty problem you mention might help clarify why it’s wrong.
Beauty
I wake up on average once for heads and twice for tails; therefore, on each wake-up my probability of heads is 1/3 and tails is 2/3.
SIA
I exist on average once in World A and twice in World B; therefore, each time I exist my probability of World A is 1/3 and B is 2/3.
BUT
Beauty knows that she had 50% probability of waking up twice. She needs this fact to infer that 2/3 of her wake-ups are tails and 1/3 heads. That is, she wakes up with 50% probability on Monday with heads, and with 50% probability on Monday AND Tuesday with tails. From Beauty’s perspective, she has two equal chances to wake up with tails and only one chance to wake up with heads – giving the 1/3 heads probability.
If the probability of heads were 90% instead of 50%, Beauty would wake up with 90% probability on Monday with heads, and with 10% probability on Monday AND Tuesday with tails. From her perspective, she still has two chances of waking up with tails, doubling that relative probability. So now she has a 90/110 chance of waking up to heads and a 20/110 chance of waking up to tails.
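A quick check of that arithmetic, counting wake-ups the way described above (one per run on heads, two per run on tails):

```python
# Weight each kind of wake-up by how often it occurs: heads runs produce one
# wake-up, tails runs produce two.
from fractions import Fraction

def wakeup_split(p_heads):
    heads_wakeups = Fraction(p_heads) * 1        # expected heads wake-ups per run
    tails_wakeups = (1 - Fraction(p_heads)) * 2  # expected tails wake-ups per run
    total = heads_wakeups + tails_wakeups
    return heads_wakeups / total, tails_wakeups / total

print(wakeup_split(Fraction(1, 2)))   # (1/3, 2/3)
print(wakeup_split(Fraction(9, 10)))  # (9/11, 2/11), i.e. 90/110 and 20/110
```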
I, unlike Beauty, know nothing about alternative worlds at all. I don’t know if there are 2 possible worlds or 10,000, and I also know nothing about the frequency of me in any of these worlds. Maybe there are 10,000 possible worlds with nobody and just one with a single me. Until I know something about the space of possible worlds, I can’t make inferences about the relative probability of me being in them.
I think the correct SIA analogy for Beauty would be:
Beauty wakes up with amnesia, knowing nothing. What can Beauty say about the length of time she has been asleep?
Following the SIA, she should reason that she is more likely to be awake in a world where she spends more time awake, so she should guess that she slept for the shortest time possible.
But for all Beauty knows, there might be ten thousand possible worlds where she slept for a year and one where she slept for an hour. Perhaps in 99% of possible worlds, she never woke up. Perhaps she was always asleep until now in every single possible world. She has no reason to think that any of these probability spaces of possible worlds is more likely than any other, and therefore no grounds for reasoning about the time she has slept.
I believe I’ve been “asleep” for about 13.75 billion years and “awake” for 34. But I believe this for reasons that have nothing to do with the likelihood of my own existence. I think it would be a mistake to infer that the universe was created moments before I started observing it, just because I’ve been observing it ever since!
“Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.”
Anyone who reads that statement will think the rule should apply to them. Rightfully so, as that is the intention of the principle.
It’s therefore unclear to me why there is a question about who ‘you’ is. Any conscious observer will be convinced they are the ‘you’ we are talking about. Uncontroversially, I think, “‘you’ can be thought of as an abstract observer who has the same identity regardless of characteristics.”
No matter who is drawn from the urn, they will think it must mean something significant. They will argue “my number was chosen before the experiment as far as I’m concerned.”
That conscious observer who is wondering whether s/he is in the big world knows that there is at least one conscious observer out there. Unfortunately, we knew that would be the case before the experiment, so there is nothing to update on.
“If they can only be one person at once, then in a smaller world some observers who would have been around in the bigger world must miss out. Which means finding that you have the person with any number X should still make you update in favor of the big world, exactly as much as the entity defined by those physical characteristics should; abstract observers weren’t guaranteed to exist either.”
At least one abstract observer was guaranteed to exist. With probability one there was going to be an abstract observer. Probability law says no updating in that case.
Suppose I flip the coin. I don’t tell you the result of the coin flip, but I do tell you that person #566,434 was selected. Are you saying that that would make you more confident that heads was selected than you were prior to knowing person #566,434 was selected? Well, then every time I flipped the coin and told you a person #, you would guess heads. And half of the time you’d be wrong (which is the worst possible success rate).
If it’s not the case that you would update in favor of heads, then I don’t understand this statement: “Which means finding that you have the person with any number X should still make you update in favor of the big world”
I wouldn’t update in such a case, because I would know I was guaranteed to hear a number that came up. If I knew that different people were told each of the numbers that came up and that I wasn’t guaranteed to get one I would update, and that is the case that’s analogous to using SIA.
I disagree with your use of the principle that you shouldn’t update on things that have probability one of occurring. If they have probability one of occurring somewhere, to someone, that’s often different to them having probability one of occurring to you now, whatever ‘you’ might be. For instance, if you were told you had been selected for a lottery, along with either 9 or 999,999 others, presumably you would guess it was the big lottery. Though if you won, you would go back to thinking them equally likely. In both cases someone would have been chosen and experienced what you did.
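To check that cancellation with assumed numbers (the pool size below is an illustration, not something specified above): say 1,000,000 candidates, from which either 10 or 1,000,000 are selected with equal prior probability, and one selected person wins.

```python
# Hypothetical lottery numbers, for illustration only: being selected favors
# the big lottery, but winning exactly undoes that update.
from fractions import Fraction

POOL = 1_000_000         # assumed candidate pool (not specified in the comment)
SMALL, BIG = 10, POOL    # you plus 9 others, or you plus 999,999 others
prior = Fraction(1, 2)

p_sel = {SMALL: Fraction(SMALL, POOL), BIG: Fraction(BIG, POOL)}
post_big_sel = prior * p_sel[BIG] / (prior * p_sel[BIG] + prior * p_sel[SMALL])
print(post_big_sel)      # 100000/100001: selection strongly favors the big lottery

# P(selected and win | size k) = (k/POOL) * (1/k) = 1/POOL, for either size.
p_win = {k: p_sel[k] * Fraction(1, k) for k in (SMALL, BIG)}
post_big_win = prior * p_win[BIG] / (prior * p_win[BIG] + prior * p_win[SMALL])
print(post_big_win)      # 1/2: winning takes you back to equal odds
```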
“If they have probability one of occurring somewhere, to someone, that’s often different to them having probability one of occurring to you now, whatever ‘you’ might be.”
What is meant by ‘you’ and ‘now’ is determined after the selection takes place. Whoever is selected becomes ‘you’. Whenever they are selected becomes ‘now’.
Do you agree that whoever was selected, whenever they were selected, would attach some significance to the fact that it was them at that time?
You are guaranteed to get selected, even in Small World, because of the label switching.
lix,
“I wake up on average once for heads and twice for tails; therefore, on each wake-up my probability of heads is 1/3 and tails is 2/3.”
No, because these three wake-ups are not equally likely.
If it’s heads and she wakes up, it’s Monday with certainty. If it’s tails and she wakes up, it’s Monday with probability 0.5 and Tuesday with probability 0.5.
So, if she wakes up, it’s Monday with probability 0.75 and Tuesday with probability 0.25.
P(heads | wake up) = P(heads | Monday)P(Monday) + P(heads | Tuesday)P(Tuesday)
= (2/3)(3/4) + (0)(1/4)
= 1/2
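For what it’s worth, a small simulation shows exactly where the two camps diverge – count per experiment and you get this 1/2; count per wake-up and you get the thirder’s 1/3:

```python
# Fair coin; heads -> 1 wake-up, tails -> 2. Compare the two ways of counting.
import random

RUNS = 200_000
heads_runs = heads_wakeups = total_wakeups = 0

for _ in range(RUNS):
    heads = random.random() < 0.5
    wakeups = 1 if heads else 2
    heads_runs += heads
    total_wakeups += wakeups
    heads_wakeups += wakeups if heads else 0

print(heads_runs / RUNS)              # ~1/2: per-experiment (the halfer count)
print(heads_wakeups / total_wakeups)  # ~1/3: per-wake-up (the thirder count)
```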