# Anthropic principles agree on bigger future filters

I finished my honours thesis, so this blog is back on. The thesis is downloadable here and also from the blue box in the lower right sidebar. I’ll blog some other interesting bits soon.

My main point was that two popular anthropic reasoning principles, the Self Indication Assumption (SIA) and the Self Sampling Assumption (SSA), as well as Full Non-indexical Conditioning (FNC), basically agree that future filter steps will be larger than we otherwise think, including the many future filter steps that are existential risks.

Figure 1: SIA likes possible worlds with big populations at our stage, which means small past filters, which means big future filters.

SIA says the probability of being in a possible world is proportional to the number of people it contains who you could be. SSA says it’s proportional to the fraction of people (or some other reference class) it contains who you could be. FNC says the probability of being in a possible world is proportional to the chance of anyone in that world having exactly your experiences. That chance is higher the larger the population of people like you in relevant ways, so FNC generally gives similar answers to SIA. For a lengthier account of all these, see here.
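The contrast between SIA and SSA can be made concrete with a toy calculation. All the numbers below are made up for illustration: two possible worlds with different population sizes at our stage, but the same *fraction* of their populations at our stage.

```python
# Toy illustration of SIA vs SSA; all population numbers are invented.
# "populous" has many people at our stage, "sparse" has few, but both
# worlds have the same fraction (0.1) of their population at our stage.
worlds = {
    "populous": {"our_stage": 1000, "total": 10_000},
    "sparse":   {"our_stage": 10,   "total": 100},
}

# SIA: probability proportional to the number of people you could be.
sia_w = {w: d["our_stage"] for w, d in worlds.items()}
sia = {w: v / sum(sia_w.values()) for w, v in sia_w.items()}

# SSA: probability proportional to the fraction of the reference class
# (here, the world's whole population) that you could be.
ssa_w = {w: d["our_stage"] / d["total"] for w, d in worlds.items()}
ssa = {w: v / sum(ssa_w.values()) for w, v in ssa_w.items()}

print(sia)  # SIA strongly favours the populous world (about 0.99)
print(ssa)  # SSA is indifferent here: both fractions are 0.1
```

SIA rewards sheer numbers of observers like us, while SSA only cares about our share of the reference class, which is why the two principles diverge in what they say about past filters but converge on larger future ones.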

SIA increases expectations of larger future filter steps because it favours smaller past filter steps. Since there is a minimum total filter size, this means it favours big future steps. This I have explained before. See Figure 1. Radford Neal has demonstrated similar results with FNC.
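A toy version of this argument, with invented numbers: hold the total filter fixed, so any probability not used up by the past filter must be made up by the future filter. SIA then weights each hypothesis by how many observers it puts at our stage.

```python
# Toy SIA filter argument; all numbers are invented for illustration.
# The total filter is fixed: a planet's chance of producing a visible
# galaxy-colonizing civilization is TOTAL_PASS, however that chance
# splits between past and future filter steps.
TOTAL_PASS = 1e-6
CANDIDATES = 1_000_000  # candidate planets

hypotheses = {
    "big past filter":   {"past_pass": 1e-5},  # hard to reach our stage
    "small past filter": {"past_pass": 1e-2},  # easy to reach our stage
}

for h in hypotheses.values():
    h["at_our_stage"] = CANDIDATES * h["past_pass"]
    # a smaller past filter forces a larger future filter:
    h["future_pass"] = TOTAL_PASS / h["past_pass"]

# SIA weights each hypothesis by the number of observers at our stage.
total = sum(h["at_our_stage"] for h in hypotheses.values())
for name, h in hypotheses.items():
    h["sia_prob"] = h["at_our_stage"] / total
    print(name, round(h["sia_prob"], 4),
          "chance of passing the future filter:", h["future_pass"])
```

The small-past-filter hypothesis gets nearly all of the probability, and it is exactly the hypothesis on which the chance of surviving the future filter is smallest.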

Figure 2: A larger filter between future stages in our reference class makes the population at our own stage a larger proportion of the total population. This increases the probability under SSA.

SSA can give a variety of results according to reference class choice. Generally it directly increases expectations of both larger future filter steps and smaller past filter steps, but only for those steps between stages of development that are at least partially included in the reference class.

For instance, if the reference class includes all human-like things, perhaps it stretches from ourselves to very similar future people who have avoided many existential risks. In this case, SSA increases the chances of large filter steps between these stages, but says little about filter steps before us, or after the future people in our reference class. This is basically the Doomsday Argument – larger filters in our future mean fewer future people relative to us. See Figure 2.
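The Doomsday-style shift can be sketched numerically. The population figures below are made up: two hypotheses that differ only in how many similar future people survive the intervening filter steps.

```python
# Toy Doomsday-style SSA calculation; populations are invented.
# Reference class: people at our stage plus very similar future people
# who have passed the filter steps between us and them.
hypotheses = {
    "doom soon":  {"us": 100, "future": 10},      # big filter just ahead
    "doom later": {"us": 100, "future": 10_000},  # small filter ahead
}

# SSA weight: the fraction of the reference class at our own stage.
weights = {h: d["us"] / (d["us"] + d["future"]) for h, d in hypotheses.items()}
probs = {h: w / sum(weights.values()) for h, w in weights.items()}

print(probs)  # "doom soon" gets the bulk of the probability
```

Being a large share of the reference class is much more likely if few people come after us, which is the Doomsday Argument in miniature.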

Figure 3: In the world with the larger early filter, the population at many stages including ours is smaller relative to some early stages. This makes the population at our stage a smaller proportion of the whole, which makes that world less likely. (The populations at each stage are a function of the population per relevant solar system as well as the chance of a solar system reaching that stage, which is not illustrated here).

With a reference class that stretches to creatures in filter stages back before us, SSA increases the chances of smaller past filter steps between those stages. This is because those filters make observers at almost all stages of development (including ours) less plentiful relative to at least one earlier stage of creatures in our reference class. This makes those at our own stage a smaller proportion of the population of the reference class. See Figure 3.

The predictions of the different principles differ in details such as the extent of the probability shift and the effect of timing. However, it is not necessary to resolve anthropic disagreement to believe we have underestimated the chances of larger filters in our future. As long as we think something like one of the above three principles is likely to be correct, we should update our expectations already.

### 30 responses to “Anthropic principles agree on bigger future filters”

1. Fantastic job; great figures. :) Therefore, alas, we are all DOOMED. :(

2. Carl Shulman

Could you post a pdf as well, for easy Kindle reading?

• The file of my thesis I put up is a pdf. I’m not sure how else to put one up, but if you tell me I’ll be glad to.

3. jsalvatier

I trust you will be at least linking to this on LW.

• I’m about to put up a bunch of related posts. I might summarize the lot on LW at the end, if no regular LW writers have.

• Larks

Hey, do you mind if I knock up a quick summary for the discussion section? I don’t want to steal your glory or anything, but I’m sure LW would love to get its hands on this.

4. Does your work imply that we should put more effort into creating an intelligence explosion?

Let’s say mankind has four fates:

(A) We don’t create an intelligence explosion and colonize the galaxy.
(B) We don’t create an intelligence explosion and soon go extinct.
(C) We create a utopian intelligence explosion and colonize the galaxy.
(D) We unintentionally create a malevolent AI god that captures all the free energy in the galaxy and so destroys all life other than itself.

Your work, if I understand it correctly, shows that (B) is almost certainly our fate. But your work shouldn’t influence our belief about the probability of (C) relative to the probability of (D). Let’s assume that knowing we would be in (C) or (D) would increase our estimate of our chances of survival.

Let’s now assume that if we put more resources into AI research we increase both the probability of (C) and (D) but don’t lower the [probability of (C)]/[probability of (D)]. Does your work show that we shouldn’t expect to be able to significantly raise the probabilities of (C) and (D), but that to the extent we could raise these probabilities, we would have a greater chance of survival?

Now let’s assume there is a fate (E) in which to avoid the great filter we seek to create an AI god that will create a utopia on Earth but will prevent us from ever leaving our solar system. Should it be easier to achieve (E) than (C) perhaps because (E) makes it harder to apply the anthropic principle?

• Quite possibly, but there are other considerations I will write more about soon.

I’m not sure what you mean in your last paragraph by ‘(E) makes it harder to apply the anthropic principle’ – do you mean that outcome is not vastly reduced in probability by either anthropic principle, so should be easier to achieve? In that case yes that outcome isn’t reduced much in probability, but it sounds pretty unlikely to be a large part of the filter to begin with, without reason for civilizations to begin such behaviour.

• By “makes it harder to apply the anthropic principle” I meant that committing to change your future population levels for anthropic reasons creates unintuitive results (such as it being easier to achieve (E) than (C)) and perhaps these unintuitive results arise because the anthropic principle doesn’t apply to situations in which they will be encountered.

If we have some scientific theory which says we are doomed, but our theory doesn’t seem to make sense if X is zero, then we should seek to make X zero.

5. The thesis is downloadable from the blue box in the lower right sidebar.

I may be looking right through it, but I can’t see this blue box. In the right sidebar I see the following subheadings:

Popular now:
Subscribe:
Email Subscription
Archives

Thanks!

• I added another link in that sentence so you can download it from elsewhere. Not sure what’s up with the blue box – should be just before archives in that list.

6. This kind of argument doesn’t seem to work very well if we are in the future *already* – inside a simulation.

7. We know more than that we are “human-like things”. We know all kinds of things about the world – including what historical era we were born into. We don’t need to consider the possibility that we are future creatures – because we already know that we aren’t. Hide that information from us from birth – and we might reason that way – but the more facts you hide from an agent, the more likely it is to draw the wrong conclusions.

• William B Swift

Among other things we know that there were probably very severe early filters. See Ward and Brownlee, Rare Earth for a summary. Their conclusion is that, because of all the things that could have gone differently in the early Earth, primitive (bacterial) life is likely to be fairly common, but multi-cellular life, much less intelligent life, is probably much less common than previously thought.

8. This paper cites Radford Neal’s paper “Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning”.

Yet Radford Neal’s paper includes sections entitled:

“Why the Doomsday Argument must be wrong” and “Defusing the Doomsday Argument with SIA or FNC”.

He says things like: “there are several reasons for rejecting the Doomsday Argument that I regard as convincing, even without a detailed understanding of why it is wrong.” and “Similar arguments refute the form of the Doomsday Argument where there are many intelligent species.”

He is evidently *against* Doomsday-related arguments, yet the citation here apparently suggests that he somehow *favours* them. What gives?

9. Carl Shulman

He’s against SSA Doomsday arguments, but his counter produces its own Doomsday argument, which he likes better since it is affected by more sorts of empirical evidence. The FNC Doomsday argument is near the end of the paper, with discussion of simulation, etc.

10. I think that the chances of anyone 5 years in the future having “exactly my experiences” is pretty minuscule. People fitting that description will not exist any more. Instead there will be people with much fancier mobile phones.

11. I predict new adherents of quantum immortality joining me. :)

With quantum immortality, it doesn’t matter if there’s a great filter ahead of us, as long as the probability of dying is not 100% and the probability of indefinite dystopia is low.

Any takers?

• JenniferRM

Based on my understanding of quantum immortality, it implies that honest adherents (as you claim to be) will suicide in all situations that don’t meet with your special approval because you don’t care about your measure, just the quality of your experiences in the few quantum narratives where you exist.

If you had really set up the experiment/manipulation correctly (say, with an automatic kill switch based on some measure or another) shouldn’t you be dead in almost all the quantum narratives that contain me? And in the ones where I see you being “not dead”, shouldn’t you have won the lottery several times by now?

• To clarify, I didn’t mean that I will be committing suicide. What I meant is that a filter step that kills everybody is not relevant because it just reduces our measure.

Personally, I don’t care about the global measure. But my friends/family and I do care about not being separated, so we’re not about to perform quantum suicide. A group quantum suicide setup has a good chance of malfunctioning in a way that separates people. This is because there will be a good chance of some dying while others don’t.

12. There’s a lot of big words on this page, but you guys can’t figure out how to create free energy huh?

13. Manfred

From the perspective of creating a prior probability distribution over numbers of [insert reference class] as a function of time, this approach seems a bit problematic because of normalization problems – it assigns infinite relative probability to infinite numbers of people. I feel like it would be more profitable to include more data about our world, enough so that we get a normalizable distribution for something like “humans living in the year 2010.” Furthermore, including more information should quite quickly improve our chances, since it will remove that pesky infinity that’s tipping the scales against us.

14. xd

This is just the Fermi paradox, and it can’t be distilled down to this without answering some fundamental questions. Either we are just like everybody else (there’s lots of us), but then where are they? Or we are not like everybody else (i.e. our chances of existing are vanishingly small).

If it’s the first then we’re in the fermi paradox. If it’s the second then we can’t say anything because all we know is that the chance of us existing is vanishingly small. We do not have the information to say that since our population is “large” then it’s statistically likely to fall. We just don’t have the answers to say whether we’re out there in the 98th percentile or not.

And the Fermi paradox is only a paradox if in fact we accept the assumption that intelligent species go traveling from star system to star system. Maybe they don’t.
