Here are a couple of contending general principles for anthropic reasoning. The question they are trying to answer is how to update credences in situations where your own existence might be evidence. This is particularly an issue where different hypotheses posit different numbers of observers. This summary is incomplete.
Self Sampling Assumption (SSA)
SSA says you are more likely to be in worlds where a greater proportion of people are like you.
Except that ‘people’ in the above sentence can be any set of things you could have been in some sense, even if you currently know you are not some of them. This group is called a ‘reference class’. How to choose which reference class to use is an unsolved problem.
This was developed by Nick Bostrom, and is discussed at length in his book Anthropic Bias. He also developed the Strong Self Sampling Assumption (SSSA), which is like SSA but treats observer moments as the relevant unit, rather than whole temporally extended observers.
From Nick Bostrom’s Anthropic Bias:
Incubator, version I

Stage (a): In an otherwise empty world, a machine called "the incubator" kicks into action. It starts by tossing a fair coin. If the coin falls tails then it creates one room and a man with a black beard inside it. If the coin falls heads then it creates two rooms, one with a black-bearded man and one with a white-bearded man. As the rooms are completely dark, nobody knows his beard color. Everybody who's been created is informed about all of the above. You find yourself in one of the rooms. Question: What should be your credence that the coin fell tails?

Stage (b): A little later, the lights are switched on, and you discover that you have a black beard. Question: What should your credence in Tails be now?
a) There is only one reference class you can use: similarly ignorant observers. You could be any of the ignorant observers in either possible world, so the conditional probability of being someone like you is 1 given heads, and likewise given tails. Discovering that you exist has, by itself, told you nothing; your credence remains 1/2. Intuitively: however the coin landed, the observations you are making would have been made.
b) When the light turns on you have two choices of reference class: all people, or all people who know they have black beards. If you choose all people, then on tails 100% of the people could be you (since the only person has a black beard), while on heads only 50% could. Thus your new credence in tails is 2/3. If you use the reference class of people who know they have black beards, you must use SSSA, because your earlier self did not know its beard colour and so can't be part of your reference class. Then everyone in your reference class in either world could still be you, so your credence remains 1/2. Notice that this last application appears to violate conservation of expected evidence. This is potentially legitimate because the reasoning is from the perspective of different observer moments (see Anthropic Bias p. 165).
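As a sanity check, the SSA numbers above can be reproduced with a short script. This is a minimal sketch of the rule "weight each world's prior by the fraction of the reference class there that is like you, then renormalize"; the function name is mine, not from the post.

```python
# SSA update: P(world | you) is proportional to
# P(world) * (fraction of the reference class in that world that could be you).

def ssa_update(priors, fractions):
    """priors, fractions: dicts keyed by world name."""
    unnorm = {w: priors[w] * fractions[w] for w in priors}
    total = sum(unnorm.values())
    return {w: p / total for w, p in unnorm.items()}

# Stage (a): reference class = similarly ignorant observers. In either
# world, everyone in the class could be you, so both fractions are 1,
# and the credence stays at 1/2.
stage_a = ssa_update({"heads": 0.5, "tails": 0.5}, {"heads": 1.0, "tails": 1.0})

# Stage (b), reference class = all people: on tails 100% of people have
# black beards, on heads only 50% do, giving tails a credence of 2/3.
stage_b = ssa_update({"heads": 0.5, "tails": 0.5}, {"heads": 0.5, "tails": 1.0})
```

Using the reference class of known black-beards instead would set both stage (b) fractions back to 1, reproducing the 1/2 answer in the text.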
Many implications of SSA depend on choice of reference class and assumptions about observers outside the issue at hand. So these are merely possible implications:
The halfer position in sleeping beauty (or the thirder position if you assume an infinite number of outside observers)
Arguments against SSA (i.e. implications that are too interesting)
- The reference class problem: outcomes are often very sensitive to the choice of reference class, which is arbitrary at this point (e.g. see discussion in Nick Bostrom’s Anthropic Bias)
- With some choices of reference class, SSA gives counterintuitive results, including backward causation and incredible predictive powers.
- It appears to imply that the relevant 'you' would necessarily have existed in any world if you exist now (otherwise you would update on finding yourself existing in the dark in the incubator above). (I believe this is discussed elsewhere, though the page was down at the time of writing.)
- It contains a discontinuity; worlds with any positive number of observers are exactly as likely upon observing your existence as they were before, but those with zero are impossible (Olum 00).
- It means that your beliefs about the existence of observers in causally disconnected regions should affect your credences about what happens here (Olum 00).
- If there are two existing groups and you don't know which you are in, you should uncontroversially think you are more likely to be in the larger one. It's not obvious why you should change your mind if you learn that the group you aren't in has since been destroyed, was destroyed immediately after its creation, or was never created at all.
Self Indication Assumption (SIA)
SIA says you are more likely to be in worlds where there are a greater number of people like you (where ‘like you’ means that you can’t yet distinguish which one you are).
Start for instance by taking into account only that you are an observer. To use SIA, multiply the chance of each world existing by its population of observers, and renormalize. You can update later on anything else you know about yourself, and you will get the same result as if at the start you only counted people for whom those things were true.
Various people have used this, originally to get around the doomsday argument. Ken Olum has most thoroughly supported it, as far as I’ve seen.
In the Incubator above:
a) On knowing that you exist in a room, you think heads is twice as likely as tails. This is because heads would mean twice as many people are alive who could be you.
b) If you are alive in heads you have a 1/2 chance of having a black beard, whereas if you are alive in tails you have a black beard with probability 1. So when you find out you have a black beard you update back toward tails, and end with a credence of 1/2 on each. Notice you would get the same answer if, instead of doing a) and then updating on black beards, you had just scaled each world by its population of black-bearded people from the start.
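The SIA procedure described above (multiply each world's prior by its count of observers who could be you, then renormalize) is easy to check numerically. A minimal sketch, with the function name my own:

```python
# SIA update: P(world | you) is proportional to
# P(world) * (number of observers in that world who could be you).

def sia_update(priors, counts):
    """priors, counts: dicts keyed by world name."""
    unnorm = {w: priors[w] * counts[w] for w in priors}
    total = sum(unnorm.values())
    return {w: p / total for w, p in unnorm.items()}

# Stage (a): heads creates two observers who could be you, tails one,
# so heads ends up twice as likely as tails.
stage_a = sia_update({"heads": 0.5, "tails": 0.5}, {"heads": 2, "tails": 1})

# Stage (b): counting only black-bearded observers (one in each world)
# gives 1/2 each -- the same as updating stage (a) on having a black
# beard, as the text notes.
stage_b = sia_update({"heads": 0.5, "tails": 0.5}, {"heads": 1, "tails": 1})
```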
The thirder position in sleeping beauty.
The doomsday argument is perfectly countered; the chance of doom returns to your prior.
The universe is almost certain to contain infinitely many observers.
The great filter is probably ahead of us.
Arguments against SIA
- The presumptuous philosopher: if we had two theories that looked equally plausible on other grounds, where one posited a trillion times as many observers in the universe, we should be almost sure that hypothesis is true (also by Bostrom). See also counterarguments to this.
- Implies virtual certainty that the universe is infinite, which seems to many an implausibly strong conclusion for such modest evidence.
- Implies virtual certainty that absurd hypotheses that we can’t rule out entirely, but which contain many more observers than we observe, are true. For instance the hypothesis that for every planet we see, there are 10^10^100 corresponding to it on ‘other planes’, filled with observers (Olum ’00 again).
- It seems wrong to update on observations indistinguishable from those that were bound to be made (i.e. you should not update on existing when someone was going to exist in any case, and you don't know anything more about who you are).
- It involves updating without new relevant information (as in sleeping beauty). What relevant thing has Beauty learned when she wakes up? (Lewis 01)
- We select people to update on the existence of from those who exist. This is like pulling a ball out of an urn which may be huge or tiny, reading the number on it, and updating because any given number was more likely to be in the big urn (the argument against this analogy).
The best places I know of to read more about all this in general
Incubator: for (a) and (b) I'd say 1/2. "I have a black beard" just means at least one conscious observer has a black beard, which was a probability-1 event (hence no updating).
Sleeping Beauty: she has no new knowledge when she wakes up. I see two valid solutions: 1/2 or 0 (not 1/3). http://neq1.wordpress.com/2010/04/29/sleeping-beauty/
Could you fill out the implications of SSA section a bit? I had no idea what you meant in this context by either the doomsday argument or sleeping beauty.
>It contains a discontinuity; worlds with any positive number of observers are exactly as likely upon observing your existence as they were before, but those with zero are impossible.
This objection applies equally well to Solomonoff induction.
> It means that your beliefs about the existence of observers in causally disconnected regions should affect your credences about what happens here
This *also* applies equally well to Solomonoff induction.
Could you elaborate?
OK, apparently this was much less clear than I thought because it took me a full minute to reconstruct it. I was thinking of Paul’s argument in What does the universal prior look like?. Solomonoff induction doesn’t care whether you’re in base-level reality or a simulation, so your credence about what happens next is distributed over all the observers observing the same data.
I found it helpful to map the example to a balls-in-urns problem. You have two urns. The Heads urn contains a black and a white ball, and the Tails urn contains a black ball only.
In (a) you’ve drawn a ball but not looked at it yet. SSA says this gives you no info about which urn you have. SIA says you can draw twice as many balls from the Heads urn so you’re twice as likely to have the Heads urn conditional on making a successful draw at all. Under SIA, if you want to estimate how many balls there are in total, you’ll have a prior expectation of 1/2 + 2/2 = 3/2, but a posterior expectation – with no additional info than that you’ve drawn a ball from one of two urns, both of which contain balls – of 1/3 + 2 * 2/3 = 5/3. (SSA refuses to update here.)
Bostrom is arguing that drawing a black ball from an urn shouldn’t do anything to your probability distribution over urns, even though one has only black balls and in the other half the balls are white.
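The commenter's urn numbers can be verified directly. A minimal sketch using exact rationals, with the setup exactly as described in the comment (Heads urn: one black and one white ball; Tails urn: one black ball):

```python
from fractions import Fraction as F

priors = {"heads": F(1, 2), "tails": F(1, 2)}
balls = {"heads": 2, "tails": 1}

# SIA posterior over urns after any successful draw: weight by ball count.
unnorm = {u: priors[u] * balls[u] for u in priors}
total = sum(unnorm.values())
post = {u: p / total for u, p in unnorm.items()}   # heads 2/3, tails 1/3

# Prior vs SIA-posterior expected total number of balls: 3/2 vs 5/3.
prior_exp = sum(priors[u] * balls[u] for u in priors)
post_exp = sum(post[u] * balls[u] for u in priors)

# Ordinary Bayesian update on drawing a *black* ball:
# P(black | heads urn) = 1/2, P(black | tails urn) = 1,
# so the Heads urn drops to probability 1/3.
like = {"heads": F(1, 2), "tails": F(1, 1)}
unnorm_b = {u: priors[u] * like[u] for u in priors}
total_b = sum(unnorm_b.values())
post_black = {u: p / total_b for u, p in unnorm_b.items()}
```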
Very nice summary of a bunch of the issues! I have some thoughts roughly on SIA and SSA that are now like a decade old but will one day actually be published when I finally get around to making some required revisions… https://static1.squarespace.com/static/55d3621de4b07a4744ce4a23/t/55f91ca4e4b0fa2519ad7a73/1442389156509/OBARS.pdf
Controversies about anthropic reasoning can all be explained by perspective inconsistency, including most of the problems listed in this well-written summary.
When solving a probability problem we can think in two distinct ways. One way is the first-person perspective: here I gather information by perception and try to make a rational judgement in my mind. The other way is the third-person perspective: it treats rational thinking as an abstract process independent of "me", and the answer to a question is what an imaginary observer with the available information should conclude. For most problems either perspective follows the exact same logical reasoning and gives the same answer. As a result we usually don't pay attention to their differences or try to distinguish them.

However there are important differences between the two. From the first-person perspective I can inherently identify myself apart from other people, simply because by introspection I can distinguish myself from the rest of the world. From the third-person perspective introspection no longer applies. Therefore everybody is treated equally and identification must be based on people's differences. For example, to identify one of a pair of identical twins, a third party has to find what is unique between them, or even artificially create a difference, e.g. by giving them different names. But from the perspective of the twins, even if they don't know their differences or don't have names, neither would confuse the other person with himself.

Obviously one would always conclude that oneself exists (cogito ergo sum). So from the first-person perspective a person cannot think about scenarios where he does not: I simply cannot imagine what my perception and mind would be like if I did not exist, because that is self-contradictory. However, from the third-person perspective, because I am treated the same way as anybody else, it is possible to consider the possibility of me not existing (here "me" has to be identified in the third person as well). As a result, information about my chance of existence is only relevant in the third-person perspective.

In summary, we can either reason from the first-person perspective and use self-identification by introspection, or reason from the third-person perspective and use information about our chances of existence. Not both. However most anthropic arguments do use both. These include the doomsday argument, the simulation argument, the presumptuous philosopher, and even SIA and SSA themselves. Therefore recognizing the importance of perspective reasoning is the key to understanding all that's wrong with these arguments.
A more detailed counter argument against Doomsday argument (as well as SSA and SIA) can be found here: