Nick Bostrom showed that either position in Extreme Sleeping Beauty seems absurd, then gave a third option. I argued that his third option seems worse than either of the original pair. If I am right there, and the case for Bayesian conditioning without updating on evidence fails, then we must choose between disregarding Bayesian conditioning in at least some situations and distrusting the aversion to extreme updates like those in Extreme Sleeping Beauty. The latter seems the necessary choice, given the huge disparity between the evidence supporting Bayesian conditioning and that supporting these particular intuitions about large updates and strong beliefs.
Notice that both the Halfer and Thirder positions on Extreme Sleeping Beauty have very similar problems. They are seemingly opposed by the same intuitions against extreme certainty in situations where we don’t feel certain, and extreme updates in situations where we hardly feel we have any evidence. Either before or after discovering you are in the first waking, you must be very sure of how the coin came up. And between ignorance of the day and knowledge of it, you must change your mind drastically. If we must choose one of these positions, then, it is not clear which is preferable on these grounds alone.
Now notice that the Thirder position in Extreme Sleeping Beauty is virtually identical to SIA, and consequently to the Presumptuous Philosopher’s position (as Nick explains, p. 64). From Anthropic Bias:
The Presumptuous Philosopher
It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion, trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion, trillion, trillion observers. The super-duper symmetry considerations are indifferent between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1 (whereupon the philosopher […] appeals to SIA)!”
The Presumptuous Philosopher is like the Extreme Sleeping Beauty Thirder because they are both in one of two possible worlds with a known probability of existing, one of which has a much larger population than the other. They are both wondering which of these worlds they are in.
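To make the philosopher’s arithmetic explicit, here is a minimal sketch of the SIA update, my own illustration, assuming a 50/50 prior over T1 and T2 (since the super-duper symmetry considerations are indifferent) and the observer counts in the quoted passage:

```python
# A minimal sketch of the SIA update in the Presumptuous Philosopher case.
# Priors and observer counts are taken from the quoted passage above.
prior_T1, prior_T2 = 0.5, 0.5
observers_T1 = 10**24  # a trillion, trillion
observers_T2 = 10**36  # a trillion, trillion, trillion

# SIA weights each theory by the number of observers it predicts.
weight_T1 = prior_T1 * observers_T1
weight_T2 = prior_T2 * observers_T2

posterior_T2 = weight_T2 / (weight_T1 + weight_T2)
print(posterior_T2)           # ~0.999999999999
print(weight_T2 / weight_T1)  # 1e12: T2 comes out about a trillion times more likely
```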
Is the Presumptuous Philosopher really so presumptuous? The analogue of the Extreme Sleeping Beauty Halfer, then, is the Unpresumptuous Philosopher. When the Unpresumptuous Philosopher learns there are a trillion times as many observers under T2, she remains cautiously unmoved. However, when the physicists later discover where in the cosmos our planet is under both theories, the Unpresumptuous Philosopher becomes virtually certain that the sparsely populated T1 is correct, while the Presumptuous Philosopher hops back onto the fence.
The Presumptuous Philosopher is often chided for being sure the universe is infinite, given there is some chance of an infinite universe existing. It should be noted that this holds only as long as he cannot restrict his possible locations in it to any finite region. The Unpresumptuous Philosopher is uncertain under such circumstances. However, she believes with probability one that we are in a finite world if she knows her location is within any finite region. For instance, if she knows the age of her spatially finite universe, she is certain that it will not continue for infinitely long. Here her presumptuous friend is quite unsure.
It seems to me that, since the two positions on Extreme Sleeping Beauty are as unintuitive as each other, the two philosophers are as presumptuous as each other. The accusation of inducing a large probability shift and encouraging ridiculous certainty can hardly be used against the SIA-Thirder-Presumptuous Philosopher position in favor of the SSA-Halfer-Unpresumptuous Philosopher side. Since the Presumptuous Philosopher is usually considered the big argument against SIA, and is not considered an argument against SSA at all, an update in favor of SIA is in order.
Yup. Any Bayesian is “presumptuous” in the sense of being willing to draw strong conclusions in the face of strong evidence.
An alternative source of anti-extreme-update intuitions is our actual uncertainty about anthropic principles. If we (dubiously) imagine SSA and SIA style principles as the only contenders, then making either extreme update would require extreme confidence that the rival account was false/ought not to be used. This would just be an instance of the general rule that extreme within-model probabilities tend to get dominated by between-model uncertainty.
Perhaps, but that shouldn’t be interpreted as evidence for/against the principles themselves then. Re the dubiousness of imagining those principles are the only contenders: yes, but the same problems should arise for any starting probabilities on heads vs. tails.
“Perhaps, but that shouldn’t be interpreted as evidence for/against the principles themselves then.”
That’s why I said “an alternative source”.
Even if we had a prior that put 50% weight on SSA and 50% weight on SIA, we’d still end up drawing strong conclusions under some circumstances. It is being a Bayesian that leads to strong conclusions sometimes, not which prior you use.
“they are both in one of two possible worlds with a known probability of existing”
No. The T1/T2 scenario is very different from (Extreme) Sleeping Beauty, because in the second case we know the probability distribution (it is generated by flipping a fair coin), while in the first case we don’t. Having any specific opinion about a numerical probability of T1 vs. T2 is presumptuous; it’s false precision, because we don’t know the distribution function.
If instead of “Sleeping Beauty” you substitute “Computer Program”, what would you want the program to output? 1/3 as dictated by the thirder position, or 1/2 as dictated by the halfer position?
The program will, obviously, be correct more often if it sticks to the thirder position. A program that’s correct more often seems like a better program, no?
If you disagree, consider a betting scenario where the Sleeping Beauty program is paid $10 every time it correctly guesses whether the coin came up heads or tails. The thirder position is the correct one if it wants to maximize its winnings.
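A rough simulation of that betting setup, as a sketch only (assuming the usual version where heads leads to one waking and tails to two, with $10 paid for each correct guess on waking), shows why always guessing tails earns more:

```python
import random

def total_winnings(guess, trials=100_000):
    """Total payout from always making the same guess on every waking."""
    winnings = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        wakings = 1 if coin == "heads" else 2  # tails means two wakings, hence two guesses
        if guess == coin:
            winnings += 10 * wakings
    return winnings

print(total_winnings("tails"))  # ~$1,000,000: correct on two wakings per tails trial
print(total_winnings("heads"))  # ~$500,000: correct on only one waking per heads trial
```

The per-waking frequency behind this (two thirds of wakings follow tails) is exactly what the thirder credence tracks.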
Given these considerations, I don’t see how there can be any dilemma. The thirder position is correct. The halfer position is incorrect.
I’m not well acquainted with this kind of problem, so let me just try an intuitive (surely naïve) assessment.
I do not think the Thirder position implies SIA.
I’d say it rather implies this principle:
all else being equal, an agent should distribute credence among worlds in proportion to the number of times that same agent is in the act of distributing credence in each of them.
I don’t know whether this already has a name but we could call it SNA: Self-Numbering Assumption.
To see that SIA and SNA are different, consider the following.
Having twice as many agents distributing credence in a world A as in a world B won’t make betting on A a more successful strategy (under iteration) for a particular agent. But that same agent operating twice in A and just once in B would make betting on A the more successful strategy under iteration.
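A minimal sketch of this contrast (my own illustration, assuming the two worlds are equally likely): what matters for per-decision success is how many times this agent decides in each world, not how many other agents there are:

```python
import random

def per_decision_accuracy(decisions_in_A, decisions_in_B, guess="A", trials=100_000):
    """Fraction of this agent's own decisions that are correct when it always guesses `guess`."""
    correct = decisions = 0
    for _ in range(trials):
        world = random.choice(["A", "B"])
        n = decisions_in_A if world == "A" else decisions_in_B
        decisions += n
        if guess == world:
            correct += n
    return correct / decisions

# SIA-style case: world A contains extra agents, but this agent decides once either way.
print(per_decision_accuracy(1, 1))  # ~0.5: the extra agents in A don't help this agent
# SNA-style case: this same agent decides twice in A and once in B.
print(per_decision_accuracy(2, 1))  # ~0.67: extra decisions by this agent in A do help
```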
As I see it, the concept of the most successful strategy is the substance of the concept of probabilistic rationality. The concept of the most successful strategy involves the concept of expected value upon iteration. In turn, iteration requires identity preservation. Perhaps when we give up this simple paradigm we just lose ground.