Anthropic principles agree on bigger future filters

I finished my honours thesis, so this blog is back on. The thesis is downloadable here.  I’ll blog some other interesting bits soon.

My main point was that two popular anthropic reasoning principles, the Self-Indication Assumption (SIA) and the Self-Sampling Assumption (SSA), as well as Full Non-indexical Conditioning (FNC), basically agree that future filter steps will be larger than we would otherwise think, including the many future filter steps that are existential risks.

Figure 1: SIA likes possible worlds with big populations at our stage, which means small past filters, which means big future filters.

SIA says the probability of being in a possible world is proportional to the number of people it contains who you could be. SSA says it is proportional to the fraction of people (or of some other reference class) it contains who you could be. FNC says the probability of being in a possible world is proportional to the chance of anyone in that world having exactly your experiences. That chance is greater the larger the population of people relevantly like you, so FNC generally gives similar answers to SIA. For a lengthier account of all these, see here.
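As a concrete toy comparison (the two worlds, their priors, and all population numbers below are my own illustrative assumptions, not anything from the thesis):

```python
# Toy comparison of SIA and SSA updates over two hypothetical worlds.
# All numbers here are made up purely for illustration.
worlds = {
    "A (many at our stage)": {"prior": 0.5, "us": 1000, "reference_class": 2000},
    "B (few at our stage)":  {"prior": 0.5, "us": 10,   "reference_class": 10000},
}

def normalize(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

# SIA: weight each world by the number of people in it who you could be.
sia = normalize({k: w["prior"] * w["us"] for k, w in worlds.items()})

# SSA: weight each world by the fraction of its reference class who you could be.
ssa = normalize({k: w["prior"] * w["us"] / w["reference_class"]
                 for k, w in worlds.items()})

# Both principles shift probability toward world A here, but by different
# amounts, since SSA cares about fractions rather than absolute numbers.
print(sia)
print(ssa)
```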

SIA increases expectations of larger future filter steps because it favours smaller past filter steps. Since there is a minimum total filter size, favouring smaller past steps means favouring bigger future steps. This I have explained before. See Figure 1. Radford Neal has demonstrated similar results with FNC.

Figure 2: A larger filter between future stages in our reference class makes the population at our own stage a larger proportion of the total population. This increases the probability under SSA.

SSA can give a variety of results according to reference class choice. Generally it directly increases expectations of both larger future filter steps and smaller past filter steps, but only for those steps between stages of development that are at least partially included in the reference class.

For instance, if the reference class includes all human-like things, perhaps it stretches from ourselves to very similar future people who have avoided many existential risks. In this case, SSA increases the chances of large filter steps between these stages, but says little about filter steps before us, or after the future people in our reference class. This is basically the Doomsday Argument: larger filters in our future mean fewer future people relative to us. See Figure 2.
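A toy version of this Doomsday-style update might look as follows; the numbers, and the choice of reference class (ourselves plus the similar future people), are illustrative assumptions of mine:

```python
# Toy Doomsday-style SSA update (illustrative numbers only).
# Assumed reference class: people at our stage plus similar future people.
worlds = {
    "survive (many future people)": {"prior": 0.5, "us": 100, "future": 10000},
    "doom (few future people)":     {"prior": 0.5, "us": 100, "future": 100},
}

def ssa_posterior(worlds):
    # SSA weight: prior times our stage's fraction of the reference class.
    weights = {k: w["prior"] * w["us"] / (w["us"] + w["future"])
               for k, w in worlds.items()}
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

posterior = ssa_posterior(worlds)
# A large future filter shrinks the future population, making people at our
# stage a bigger fraction of the reference class, so SSA favours the doom world.
```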

Figure 3: In the world with the larger early filter, the population at many stages including ours is smaller relative to some early stages. This makes the population at our stage a smaller proportion of the whole, which makes that world less likely. (The populations at each stage are a function of the population per relevant solar system as well as the chance of a solar system reaching that stage, which is not illustrated here).

With a reference class that stretches back to creatures in filter stages before us, SSA increases the chances of smaller past filter steps between those stages. This is because those filters make observers at almost all stages of development (including ours) less plentiful relative to at least one earlier stage of creatures in our reference class. This makes those at our own stage a smaller proportion of the population of the reference class. See Figure 3.
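Sketching this with made-up numbers, where the reference class is assumed to include the earlier-stage creatures as well as us:

```python
# Toy SSA update with a reference class that includes creatures at an
# earlier filter stage (all numbers are my own, purely illustrative).
worlds = {
    "small early filter": {"prior": 0.5, "early_stage": 10000, "our_stage": 1000},
    "large early filter": {"prior": 0.5, "early_stage": 10000, "our_stage": 10},
}

def ssa_posterior(worlds):
    weights = {}
    for k, w in worlds.items():
        # Our stage's fraction of the whole reference class in that world.
        fraction = w["our_stage"] / (w["early_stage"] + w["our_stage"])
        weights[k] = w["prior"] * fraction
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

posterior = ssa_posterior(worlds)
# A large early filter leaves those at our stage a smaller fraction of the
# reference class, so SSA shifts probability toward the small-filter world.
```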

The predictions of the different principles differ in details such as the extent of the probability shift and the effect of timing. However, it is not necessary to resolve anthropic disagreement to believe we have underestimated the chances of larger filters in our future. As long as we think something like one of the above three principles is likely to be correct, we should update our expectations already.

[Edited 2021 to change download options]

When is investment procrastination?

I suggested recently that the link between procrastination and perfectionism has to do with construal level theory:

When you picture getting started straight away the close temporal distance puts you in near mode, where you see all the detailed impediments to doing a perfect job. When you think of doing the task in the future some time, trade-offs and barriers vanish and the glorious final goal becomes more vivid. So it always seems like you will do a great job in the future, whereas right now progress is depressingly slow and complicated.

This set of thoughts reminds me of those generally present when I consider the likely outcomes of getting further qualifications vs. employment, and of giving my altruistically intended savings to the best cause I can find now vs. accruing interest and spending them later. In general the effect could apply to any question of how long to prepare for something before you go out and do it. Do procrastinators invest more?


Meet science: rationalizing evidence to save beliefs

This is how science classes mostly went in high school. We would learn about a topic that had been discovered scientifically, for instance that if you add together two particular solutions of ions, some of the ions will precipitate out as a solid salt. Then we would do an experiment, wherein we would add the requisite solutions and get something entirely wrong in its color, smell, quantity, or presence. Then we would write a report with our hypothesis, the contradictory results, and a long discussion about all the mistakes that could be to blame for this unexpected result, and conclude that the real answer was probably still what we hypothesized (since we read that in a book).

Given that they had not taught the children anything about priors, this seems like a strange way to demonstrate science.

Poverty does not respond to incentives

I wrote a post a while back saying that preventing ‘exploitative’ trade is equivalent to preventing an armed threat by eliminating the ‘not getting shot in the head’ option. Some people countered this argument by saying that it doesn’t account for how others respond. If poor people take the option of being ‘exploited’, they won’t get offered such good alternatives in future as they will if they hold out.

This seems unlikely, but it reminds me of a real difference between these situations. If you forcibly prevent the person with the gun to their head from responding to the threat, the person holding the gun will generally want to abandon the threat, as she now has nothing to gain and everything to lose. The world, on the other hand, will not relent from making people poor if you prevent the poor from responding to it.

I wonder if the misintuition about the world treating people better if they can’t give in to its ‘coercion’ is a result of familiarity with how single-agent threateners behave in this situation. As a side note, this makes preventing ‘exploitative’ trade worse relative to preventing threatened parties from acting on threats.

Nothing wastes resources like saving them

Imagine you find yourself in possession of a diamond mine. However you don’t like diamonds very much; you think they are vastly overvalued compared to important resources such as soil. You are horrified that people waste good soil in their front gardens where they are growing nothing of much use, and think it would be better if they decorated with a big pile of this useless carbon crystal. What do you do?

a) Cover your own lawn with diamonds

b) Donate as many diamonds as you can for free to anyone who might use them to decorate where they would use soil

c) Sell the diamonds. Buy something you do value.

d) Something else

Environmentalism often takes the form of the conviction that human labor should take the place of other resource use. Bikes should be ridden instead of cars, repair is superior to replacement, washing and sorting recycling is better than using up tip space, and so on. This is usually called ‘saving resources’, not ‘using up more valuable resources’.

One might argue that while human labor is usually relatively expensive (you can generally make much more selling five minutes of time than a liter of tip space and a couple of cans’ worth of clean used steel), environmentalists often consider the other resources to be truly more valuable, often because they are non-renewable and need to be shared with everyone in the future too. Even so, since when is it sensible to treat your overvalued resources as if they were worthless? How will resources come to be used more efficiently if those who care about the issue destroy their own potential by donating their most valuable assets to the world at large, in the form of the very things which the world supposedly blithely squanders?