# Sleeping Beauty should remain pure


Consider the Sleeping Beauty Problem. Sleeping Beauty is put to sleep on Sunday night. A coin is tossed. If it lands heads, she is awoken once on Monday, then sleeps until the end of the experiment. If it lands tails, she is woken once on Monday, drugged to remove her memory of this event, then awoken once on Tuesday, before sleeping till the end of the experiment. The awakenings during the experiment are indistinguishable to Beauty, so when she awakens she doesn’t know what day it is or how the coin fell. The question is this: when Beauty wakes up on one of these occasions, how confident should she be that heads came up?

There are two popular answers, 1/2 and 1/3. However, virtually everyone agrees that if Sleeping Beauty should learn that it is Monday, her credence in Tails should be reduced by half, from whatever it was initially. So ‘Halfers’ come to think heads has a 2/3 chance, and ‘Thirders’ come to think heads is as likely as tails. This is the standard Bayesian way to update, and is pretty uncontroversial.

Now consider a variation on the Sleeping Beauty Problem where Sleeping Beauty will be woken up one million times on tails and only once on heads. Again, the probability you initially put on heads is determined by the reasoning principle you use, but the probability shift if you are to learn that you are in the first awakening will be the same either way: you will have to shift your odds by a million to one toward heads. Nick Bostrom points out that in this scenario, either before or after this shift you will have to be extremely certain either of heads or of tails, and that such extreme certainty seems intuitively unjustified.

Extreme Sleeping Beauty wakes up a million times on tails or once on heads. There is no choice of initial credence in heads which doesn't lead to extreme certainty either before or after knowing she is at her first waking.

However the only alternative to this certainty is for Sleeping Beauty to keep odds near 1:1 both before and after she learns she is at her first waking. This entails apparently giving up Bayesian conditionalization: having excluded 99.9999% of the situations she might have been in if tails had come up, Sleeping Beauty retains her previous credence in tails.
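The arithmetic behind this dilemma is easy to check directly. Below is a minimal sketch, assuming one million wakings on tails as above; the function name and setup are mine, not Nick's:

```python
from fractions import Fraction

N = 10**6  # number of tails wakings in Extreme Sleeping Beauty

def on_learning_first_waking(prior_heads):
    """Bayesian update of P(heads) on learning 'this is the first waking'.
    That evidence has likelihood 1 under heads and 1/N under tails."""
    p_h = Fraction(prior_heads)
    p_t = 1 - p_h
    return p_h / (p_h + p_t * Fraction(1, N))

# A halfer starts at 1/2 and ends up almost certain of heads:
print(float(on_learning_first_waking(Fraction(1, 2))))      # ~0.999999
# A thirder-style reasoner ends at 1/2, but started almost certain of tails:
print(float(on_learning_first_waking(Fraction(1, N + 1))))  # 0.5
```

Either the prior or the posterior is within one part in a million of certainty, whichever initial credence is chosen.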

This, however, is what Nick proposes: his ‘hybrid model’ of Sleeping Beauty. He argues that this does not violate Bayesian conditionalization in cases such as this, because Sleeping Beauty is in different indexical positions before and after knowing that she is at her first waking, so her observer-moments (thick time-slices of a person) at the different times need not agree.

I disagree, as I shall explain. Briefly: the disagreement between different observer-moments is deeper than it first seems and should not occur; the existing arguments against so-called non-indexical conditioning also tell against the hybrid model; and Nick fails in his effort to show that Beauty won’t predictably lose money gambling.

### Is hybrid Beauty Bayesian?

Nick argues first that a Bayesian may accept having 50:50 credences both before and after knowing that it is Monday, then claims that one should do so, given the absurdities of the Extreme Sleeping Beauty problem above and variants of it. His argument for the first part is as follows (or see p10). There are actually five rather than three relevant indexical positions in the Sleeping Beauty Problem. The extra two are Sleeping Beauty after she knows it is Monday, under heads and under tails respectively. He explains that it is the ignorant Beauties who should think the chance of Heads is half, and the informed Mondayers who should think the chance of Heads is still half conditional on it being Monday. Since these are observer-moments in different locations, he claims there is no inconsistency, and Bayesian conditionalization is upheld (presumably meaning that each observer-moment has a self-consistent set of beliefs).

He generalizes that one need not believe P(X) = A just because one used to think P(X|E) = A and one just learned E. For that inference to be required, the probability of X given that you don’t yet know E but will learn it would have to equal the probability of X given that you do know E but previously did not. Basically, conditional probabilities must not suddenly change just as you learn the conditions hold.

Why exactly a conditional probability might do this is left to the reader’s imagination. In this case Nick infers that it must have happened somehow, as no apparently consistent set of beliefs will save us from making strong updates in the Extreme Sleeping Beauty case and variations on it.

If receiving new evidence gives one leave to break consistency with any previous beliefs on the grounds that one’s conditional credences may have changed with one’s location, there would be little left of Bayesian conditioning in practice. Normal Bayesian conditioning is remarkably successful, then, if we are to learn that a huge range of other inferences were equally well supported in every case of its use.

Nick’s calling Beauty’s unchanging belief in even odds consistent for a Bayesian is not because these beliefs meet some sort of Bayesian constraint, but because he is assuming there are no constraints on the relationship between the beliefs of different Bayesian observer-moments. By this reasoning, any set of internally consistent belief sets can be ‘Bayesian’. In the present case we choose our beliefs out of a powerful disinclination toward making certain updates. We should admit, then, that it is this intuition driving our probability assignments, and not call the result a variant of Bayesianism. And once we have stopped calling it Bayesianism, we must ask whether the intuitions that motivate it really have the force behind them that the intuitions supporting Bayesianism in temporally extended people do.

### Should observer-moments disagree?

Nick’s argument works by distinguishing every part of Beauty with different information as a different observer. This is used to allow them to safely hold beliefs that are inconsistent with one another. So this argument is defeated if Bayesians should agree with one another when they know one another’s posteriors, share priors, and know one another to be rational. Aumann’s agreement theorem does indeed show this. There is a slight complication in that the disagreement is over probabilities conditional on different locations, but the locations are related in a known way, so it appears they can be converted to disagreement over the same question. For instance, past Beauty has a belief about the probability of heads conditional on her being followed by a Beauty who knows it is Monday, and future Beauty has a belief conditional on the Beauty in her past being followed by one who knows it is Monday (which she now knows it was).

Intuitively, there is still only one truth, and consistency is a tool for approaching it. Dividing people into a lot of disagreeing parts so that they are consistent by some definition is like paying someone to walk your pedometer in order to get fit.

Consider the disagreement between observer-moments in more detail. Suppose that before Sleeping Beauty knows what day it is, she assigns 50 percent probability to heads having landed. Suppose she then learns that it is Monday, and still believes she has a 50 percent chance of heads. Let’s call the ignorant observer-moment Amy, and the later moment who knows it is Monday Betty.

Amy and Betty do not merely come to different conclusions with different indexical information. Betty believes Amy was wrong, given only the information Amy had. Amy thought that conditional on being followed by an observer-moment who knew it was Monday, the chances of Heads were 2/3. Betty knows this, and knows nothing else except that Amy was indeed followed by an observer-moment who knows it is Monday, yet believes the chances of heads are in fact half. Betty agrees with the reasoning principle Amy used. She also agrees with Amy’s priors. She agrees that were she in Amy’s position, she would have the same beliefs Amy has. Betty also knows that though her location in the world has changed, she is in the same objective world as Amy – either Heads or Tails came up for both of them. Yet Betty must knowingly disagree with Amy about how likely that world is to be one where Heads landed. Neither Betty nor Amy can argue that her belief about their shared world is more likely to be correct than the other’s. If this principle is even a step in the right direction, then these observer-moments could do better by aggregating their apparently messy estimates of reality.

### Identity with other unlikely anthropic principles

Though I don’t think Nick mentions it, the hybrid model’s reasoning is structurally identical to SSSA (the Strong Self-Sampling Assumption) used with the reference class of ‘people with exactly one’s current experience’, both before and after receiving evidence (different reference classes in each case, since they have different information). In both cases every member of Sleeping Beauty’s reference class shares the same experience. This means the proportion of her reference class who share her current experiences is always one. This allows Sleeping Beauty to stick with the fifty percent chance given by the coin, both before and after knowing she is in her first waking, without any interference from changing potential locations.

SSSA with such a narrow reference class is exactly analogous to non-indexical conditioning, where ‘I observe X’ is interpreted as ‘X is observed by someone in the world’. Under both, possible worlds where your experience occurs nowhere are excluded, and all other worlds retain their prior probabilities, normalized. Nick has criticised non-indexical conditioning because it leads to an inability to update on most evidence, thus prohibiting science, for instance. Since most people are quite confident that it is possible to do science, they are implicitly confident that non-indexical conditioning is well off the mark. This implies that SSSA using the narrowest reference class is just as implausible, except that it may be more readily traded for SSSA with other reference classes when it gives unwanted results. Nick has suggested SSA should be used with a broader reference class for this reason (e.g. see Anthropic Bias p181), though he also supports using different reference classes at different times.

These reasoning principles are more appealing in the Extreme Sleeping Beauty case, because our intuition there is not to update on evidence. However if we pick different principles for different circumstances according to which conclusions suit us, we aren’t using those principles; we are using our intuitions. There isn’t necessarily anything inherently wrong with using intuitions, but when there are reasoning principles available that have been supported by a mesh of intuitively correct reasoning and experience, a single untested intuition would seem to need some very strong backing to compete.

### Beauty will be terrible at gambling

It first seems that Hybrid Beauty can be Dutch-Booked (offered a collection of bets she would accept and which would lead to certain loss for her), which suggests she is being irrational. Nick gives an example:

Upon awakening, on both Monday and Tuesday,
before either knows what day it is, the bookie offers Beauty the following bet:

Beauty gets \$10 if HEADS and MONDAY.
Beauty pays \$20 if TAILS and MONDAY.
(If TUESDAY, then no money changes hands.)

On Monday, after both the bookie and Beauty have been informed that it is
Monday, the bookie offers Beauty a further bet:

Beauty gets \$15 if TAILS.
Beauty pays \$15 if HEADS.

If Beauty accepts these bets, she will emerge \$5 poorer.

Nick argues that Sleeping Beauty should not accept the first bet, because the bet will have to be made twice if tails comes up and only once if heads does, so that Sleeping Beauty isn’t informed about which waking she is in by whether she is offered a bet. It is known that when a bet on A vs. B will be made more times conditional on A than conditional on B, it can be irrational to bet according to the odds you assign to A vs. B. Nick illustrates:

…suppose you assign credence 9/10 to the proposition that the trillionth digit in the decimal expansion of π is some number other than 7. A man from the city wants to bet against you: he says he has a gut feeling that the digit is number 7, and he offers you even odds – a dollar for a dollar. Seems fine, but there is a catch: if the digit is number 7, then you will have to repeat exactly the same bet with him one hundred times; otherwise there will just be one bet. If this proviso is specified in the contract, the real bet that is being offered you is one where you get \$1 if the digit is not 7 and you lose \$100 if it is 7.

However, in these cases the problem stems from the bet being paid out many times under one circumstance. Making extra bets that will never be paid out cannot affect the value of a set of bets. Imagine the aforementioned city man offered his deal, but added that all the bets other than your first one would be called off once you had made your first one. You would be in the same situation as if the bet had not included his catch to begin with. It would be an ordinary bet, and you should be willing to bet at the obvious odds. The same goes for Sleeping Beauty.
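The difference between the two versions of the city man's offer is easy to verify. A quick sketch of the expected values, using the 9/10 credence from the quote above:

```python
from fractions import Fraction

p7 = Fraction(1, 10)  # your credence that the trillionth digit of pi is 7

# With the catch: if the digit is 7, you lose the same dollar a hundred times.
ev_with_catch = (1 - p7) * 1 + p7 * (-100)
# With the later bets called off: an ordinary even-odds dollar bet.
ev_called_off = (1 - p7) * 1 + p7 * (-1)

print(ev_with_catch)   # -91/10, an expected loss of $9.10
print(ev_called_off)   # 4/5, an expected gain of $0.80
```

Cancelling the repeat bets zeroes out their terms and restores the ordinary bet's positive expected value.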

We can see this more generally. Suppose E(x) is the expected value of x, P(Si) is the probability of situation i arising, and V(Si) is the value to you if it arises. A bet consists of a set of gains or losses to you assigned to situations that may arise.

E(bet) = P(S1)*V(S1) + P(S2)*V(S2) + …

The City Man’s offered bet is bad because it has a large number of terms with negative value and relatively high probability, since they occur together rather than being mutually exclusive in the usual fashion. It is a trick because it is presented at first as if there were only one term with negative value.

Where bets will be written off in certain situations, V(Si) is zero in the terms corresponding to those situations, so the whole terms are also zero, and may as well not exist. This means the first bet Sleeping Beauty is offered in her Dutch-booking test should be made at the same odds as if she would only bet once on either coin outcome. Thus she should take the bet, and will be Dutch booked.
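The resulting sure loss is then easy to verify. A sketch, assuming the second bet is at even \$15 stakes (the assumption under which Nick's stated \$5 loss comes out):

```python
# Beauty's net winnings if she accepts both bets at her hybrid-model odds.
# Bet 1 (on waking): +$10 if heads & Monday, -$20 if tails & Monday,
#   no money changes hands on Tuesday.
# Bet 2 (after learning it is Monday): +$15 if tails, assumed -$15 if heads.
if_heads = +10 - 15   # bet 1 pays, bet 2 loses
if_tails = -20 + 15   # bet 1 costs (Monday only; Tuesday is void), bet 2 pays
print(if_heads, if_tails)  # -5 -5: a certain $5 loss either way
```

Whichever way the coin lands, Beauty ends \$5 poorer, which is the Dutch book.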

### Conclusion

In sum, Nick’s hybrid model is not a new kind of Bayesian updating, but an appeal to a supposed loophole where Bayesianism makes few requirements. There doesn’t seem to be a loophole there, however, and if there were, it would be a huge impediment to most practical uses of updating. Reasoning principles which are arguably identical to the hybrid model in the relevant ways have previously been discarded by most, due among other things to their obstruction of science. Lastly, Sleeping Beauty really will lose bets if she adopts the hybrid model and is otherwise sensible.

### 20 responses to “Sleeping Beauty should remain pure”

1. You are basically arguing for common priors across different indexical contexts, and your arguments repeat standard arguments for common priors. Common priors allow principled beliefs, a common science, and avoid disagreements and Dutch books.

2. Carl Shulman

The betting part goes through fine if you would cooperate in a PD with your mirror-duplicate. Arntzenius discusses it in this paper: http://www.stanford.edu/~joelv/teaching/184/arntzenius%20-%20reflections%20on%20sleeping%20beauty.pdf

I know Nick endorses one-boxing on Newcomb problems, so he may not mind the problem for conventional CDTers. [Arntzenius buys two-boxing CDT, and in the article says that it would be peculiar to have an interaction between epistemology and decision theory in Sleeping Beauty.]

• When you say the betting part goes through fine, do you mean that SB can or cannot be Dutch booked?

• Carl Shulman

Can’t be dutch booked.

• Then I don’t see how his argument is relevant to that. Arntzenius is discussing a situation where bets on Tuesday count for something. Without that how does one’s decision theory have any effect?

• Carl Shulman

I wake up and get offered the bet. If I’m using SSSA with the narrowest reference class, I assign probability 0.5 to Heads-Monday, 0.25 to Tails-Monday, and 0.25 to Tails-Tuesday.

If I’m a two-boxer on Newcomb’s problem, then I compute the expected utility of taking the first bet (get \$10 if it’s Heads-Monday, pay \$20 if it’s Tails-Monday) as (\$10)(0.5 probability of Heads-Monday) - (\$20)(0.25 probability of Tails-Monday) = 0, and am indifferent about taking it. So I can be Dutch booked.

If I’m a one-boxer, then, conditional on it being Tails, I think of my choice as determining my bets on both Monday and Tuesday. Whether it’s Tails-Monday or Tails-Tuesday, choosing to bet means losing \$20. So now the computation goes (\$10)(0.5 probability of Heads)-(\$20)(0.5 probability of Tails)=-\$5. So I won’t take the first bet, and won’t be Dutch Booked.

• It’s true in general that if there are other results of your actions you may not want to bet at the odds you believe.

But suppose on any given waking you don’t care about the value accrued by other wakings in future or counterfactually. Perhaps different people are woken on each occasion for instance, or you are in God’s Coin Toss not SB. Then everyone should bet at their real odds, and be dutch booked if they follow the hybrid model.

• Carl Shulman

Sure, the hybrid agent in God’s coin toss will take a Dutch book, and the hybrid agent at any given time would like to stop making these anthropic updates, i.e. the hybrid decision model is self-effacing. What you would like to do is fix your current probability distribution over worlds and then have your future selves implement the policy you would have decided on given your anthropic situation at the time of commitment.

But SIA is self-effacing too. Say I flip a coin to determine whether I will produce a duplicate of you on Tuesday (in a mirrored room, with mirrored me, etc: Elga’s Dr Evil setup) that you don’t care about. If tails, then I duplicate, if heads, then I don’t. On Monday I ask you whether you will take a bet that pays you \$15 if HEADS, and costs \$15 if TAILS. Then, on Tuesday I ask whether you will accept a bet that pays you \$10 if TAILS and costs you \$20 if HEADS. So you predictably will take a dutch book and lose \$5.

If you could, on Monday you would like to modify yourself so as not to make that anthropic update when you learn it’s Tuesday. Likewise, the hybrid agent in God’s Coin toss who doesn’t know the day would like to avoid making the anthropic update upon learning it’s Monday.

• Me on Monday would like to modify itself only because the future selves it cares about are disproportionately those around under heads and thus paying (i.e. it is supposed that the duplicate’s winnings do not please Monday me). You could set up such a situation with any reasoning principles: take a bet which would be sensible for the participants, take another agent who cares more about those who would lose the bet than those who would win. Make this agent the past self of the lot of them. He would like to modify them to not bet. So what?

4. Phil Koop

“However virtually everyone agrees that if Sleeping Beauty should learn that it is Monday, her credence in Tails should be reduced by half …”

Interesting. That’s news to me.

“So ‘Halfers’ come to think heads has a 2/3 chance, and ‘Thirders’ come to think heads is as likely as tails.”

Well, that’s not “reduced by half.” But never mind.

“This is the standard Bayesian way to update, and is pretty uncontroversial.”

I cannot follow this assertion. The Bayesian update is:

P(H|M) = P(M|H)P(H) / P(M)

Presumably everyone believes that P(M|H) = 1, since the problem definition says so. Thirders believe that P(H) = 1/3 and P(M) = 2/3. That is consistent with a Bayesian update: (1/3) / (2/3) = 1/2.

What do halfers believe? P(H) = 1/2. What about P(M)? I don’t know directly, but you have told me that they believe P(H|M) = 2/3, and that they are Bayesian updaters. In that case, they must believe that P(M) = 3/4. Why do they believe this? Honest question: Sleeping Beauty is a turnoff for me and I haven’t been paying attention.

• Why a halfer believes there is a 3/4 chance of it being Monday:

P(M) = P(M|H)*P(H) + P(M|T)*P(T)
= 1 * 1/2 + 1/2 * 1/2
= 3/4

6. Phil Koop

“However in these cases the problem stems from the bet being paid out many times under one circumstance. Making extra bets that will never be paid out cannot affect the value of a set of bets.”

Isn’t that the entire substance of Sleeping Beauty though? Halfers and thirders are quoting different probabilities for different events; the difference between these events is exactly whether the bet must be paid out twice under one circumstance. There is no reason to believe that they have different probability measures, let alone different interpretations of probability. There is neither an interpretive nor a diachronic nor a referential paradox.

7. Sleeping Beauty

I have a hard time motivating this because I don’t share Bostrom’s intuitive qualms with Extreme Sleeping Beauty. It’s straightforward expected value. If I’m making 1,000,001 observations with probability 1/2 and 1 observation with probability 1/2, then the mean number of observations I expect to make is 500,001, and when I can’t distinguish between the observations the probability that I made any particular observation is (1/2)/(500,001). And it’s not just subjective — if you pay a nurse temp, hired just for the duration of the experiment, \$1 every time (s)he wakes me up, you’d better expect to pay on average \$500,001 even though the probability is only 1/2 that you’ll have to pay more than \$1. But if you’re cutting the nurse a check for just the first day’s work (with only one awakening) you don’t have to wait for the coin toss, you won’t go wrong by making it out for \$1. That’s a very large and entirely justified increase in certainty just because you know it’s the first awakening. Similarly, if I learn it’s the first awakening then it can’t be any of the 1,000,000 other possible observations and my information has improved radically, radically changing the odds. Where’s the paradox?

8. Michael Sikivie

The expected number of tails is the infinite series
1*(1/2) + 2*(1/2)^2 + 3*(1/2)^3 + 4*(1/2)^4 + …, which I couldn’t tell you the exact value of, but I promise you it converges, because each term is roughly one half the previous one (geometric series). It seems to converge to a little over two.

Heads can only happen once. She wakes up each turn. So I’d say she has about 1/3 chance of heads when she wakes up, no?
