
Economic growth and parallelization of work

Eliezer suggests that increased economic growth is likely bad for the world, as it should speed up AI progress relative to work on AI safety. He reasons that this should happen because safety work is probably more dependent on insights building upon one another than AI work is in general. Thus work on safety should parallelize less well than work on AI, so should be at a disadvantage in a faster paced economy. Also, unfriendly AI should benefit more from brute computational power than friendly AI. He explains,

“Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.” …

“Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.”

I’m sympathetic to others’ criticisms of this argument, but would like to point out a more basic problem, granting all other assumptions. As far as I can tell, the effect of economic growth on parallelization should go the other way: economic growth should make work in a given area less parallel, relatively helping those projects that do not parallelize well.

Economic growth, without substantial population growth, means that each person is doing more work in their life. This means the work that would have otherwise been done by a number of people can be done by a single person, in sequence. The number of AI researchers at a given time shouldn’t obviously change much if the economy overall is more productive. But each AI researcher will have effectively lived and worked for longer, before they are replaced by a different person starting off again ignorant. If you think research is better done by a small number of people working for a long time than a lot of people doing a little bit each, economic growth seems like a good thing.
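
To make the arithmetic concrete, here is a minimal sketch (my own toy numbers and “insights per year” framing, not anything from the original discussion): with a fixed chain of serially dependent insights, higher per-person productivity means fewer researchers working in sequence, and so fewer handoffs between a retiring researcher and an ignorant newcomer.

```python
# Toy sketch with invented numbers: a serially dependent research program needs
# a fixed chain of insights. Higher per-person productivity means one career
# covers more of the chain, so fewer successive researchers and fewer handoffs.

INSIGHTS_NEEDED = 120   # length of the serial chain of insights (illustrative)
CAREER_YEARS = 40       # one researcher's working life

for insights_per_year in (1, 2, 3):  # rises with economic growth
    insights_per_career = insights_per_year * CAREER_YEARS
    researchers_in_sequence = -(-INSIGHTS_NEEDED // insights_per_career)  # ceiling division
    handoffs = researchers_in_sequence - 1
    print(f"{insights_per_year} insight(s)/year: "
          f"{researchers_in_sequence} researcher(s) in sequence, {handoffs} handoff(s)")
```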

On this view, economic growth is not like speeding up time – it is like speeding up how fast you can do things, which is like slowing down time. Robotic cars and more efficient coffee lids alike mean researchers (and everyone else) have more hours per day to do things other than navigate traffic and lid their coffee. I expect economic growth seems like speeding up time if you imagine it speeding up others’ abilities to do things and forget it also speeds up yours. Or alternatively if you think it speeds up some things everyone does, without speeding up some important things, such as people’s abilities to think and prepare. But that seems not obviously true, and would anyway be another argument.

SIA and the Two Dimensional Doomsday Argument

This post might be technical. Try reading this if I haven’t explained everything well enough.

When the Self Sampling Assumption (SSA) is applied to the Great Filter it gives something pretty similar to the Doomsday Argument, which is what it gives without any filter. SIA gets around the original Doomsday Argument. So why can’t it get around the Doomsday Argument in the Great Filter?

The Self Sampling Assumption (SSA) says you are more likely to be in possible worlds which contain larger ratios of people you might be vs. people you know you are not*.

Figure: If you have a silly hat, SSA says you are more likely to be in World 2, assuming Worlds 1 and 2 are equally likely to exist (i.e. you haven’t looked aside at your companions) and your reference class is people.

The Doomsday Argument uses the Self Sampling Assumption. Briefly, it argues that if there will be many more generations of humans, the ratio of people who might be you (those born at about the same time as you) to people you can’t be (everyone else) will be smaller than it would be if there are few future generations of humans. Thus a future with few generations is more likely than previously estimated.
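
As a rough numeric sketch (the populations are invented for illustration), the SSA shift can be computed directly from these ratios:

```python
# SSA weighs each possible world by the fraction of its people who might be
# you, here meaning people born at about the same time as you. Toy numbers.

def ssa_weight(people_like_you, other_people):
    return people_like_you / (people_like_you + other_people)

# Two worlds, assumed equally likely before anthropic reasoning:
doom_soon = ssa_weight(people_like_you=10, other_people=90)    # few future generations
doom_late = ssa_weight(people_like_you=10, other_people=990)   # many future generations

total = doom_soon + doom_late
print(f"P(doom soon) = {doom_soon / total:.2f}")   # ~0.91
print(f"P(doom late) = {doom_late / total:.2f}")   # ~0.09
```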

An unusually large ratio of people in your situation can be achieved by a possible world having unusually few people unlike you in it or unusually many people like you, or any combination of these.

 

Figure: Fewer people who can’t be me or more people who may be me make a possible world more likely according to SSA.

For instance, in the horizontal dimension you can compare a set of worlds which all have the same number of people like you, and different numbers of people you are not. The world with few people unlike you has the largest increase in probability.

 

Figure (Doomsday): The top row from the previous diagram. The Doomsday Argument uses possible worlds varying in this dimension only.

The Doomsday Argument is an instance of variation in the horizontal dimension only. In every world there is one person with your birth rank, but the numbers of people with future birth ranks differ.

At the other end of the spectrum, you could compare worlds with the same number of future people and varying numbers of current people, as long as you are ignorant of how many current people there are.

Figure: The vertical axis. The number of people in your situation changes, while the number of others stays the same. The world with a lot of people like you gets the largest increase in probability.

This gives a sort of Doomsday Argument: the favoured worlds have a large current population relative to the fixed future population, so the population will fall and most groups won’t survive.

The Self Indication Assumption (SIA) is equivalent to using SSA and then multiplying the results by the total population of people both like you and not.

In the horizontal dimension, SIA undoes the Doomsday Argument. SSA favours smaller total populations in this dimension, which are disfavoured to the same extent by SIA, perfectly cancelling.

[1/total] × total = 1
(the bracketed term is the SSA shift alone)

In vertical cases however, SIA actually makes the Doomsday Argument analogue stronger. The worlds favoured by SSA in this case are the larger ones, because they have more current people. These larger worlds are further favoured by SIA.

[(total – 1)/total] × total = total – 1
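
Here is a small sketch of both calculations, following the formulas above (the particular populations are my own illustration):

```python
# SIA weight = SSA weight multiplied by the world's total population.

def ssa_weight(like_you, others):
    return like_you / (like_you + others)

def sia_weight(like_you, others):
    return ssa_weight(like_you, others) * (like_you + others)  # reduces to like_you

# Horizontal dimension: one person like you, varying numbers of people unlike you.
for others in (9, 99, 999):
    print("horizontal:", ssa_weight(1, others), sia_weight(1, others))
# SSA favours the small worlds; SIA's factor of total exactly cancels this,
# leaving every world with the same weight of 1, so the Doomsday shift is undone.

# Vertical dimension: one person unlike you, varying numbers of people like you.
for like_you in (9, 99, 999):
    print("vertical:", ssa_weight(like_you, 1), sia_weight(like_you, 1))
# SSA already favours the big worlds here, and SIA amplifies that preference.
```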

The second type of situation is relatively uncommon, because you will tend to know more about the current population than the future population. However, cases in between the two extremes are not so rare. We are uncertain, for instance, about creatures at about our level of technology on other planets, and also about creatures at some future levels.

This means the Great Filter scenario I have written about is an in-between scenario, which is why the SIA shift doesn’t cancel the SSA Doomsday Argument there, but rather makes it stronger.

Expanded from p32 of my thesis.

——————————————-
*Or observers you might be vs. those you are not, for instance – the reference class may be anything, but that is unnecessarily complicated for the point here.

SIA says AI is no big threat

Artificial Intelligence could explode in power and leave the direct control of humans in the next century or so. It may then move on to optimize the reachable universe to its goals. Some think this sequence of events likely.

If this occurred, it would constitute an instance of our star passing the entire Great Filter. If we do go on to cause such an intelligence explosion, then we are the first civilization in roughly our past light cone to be in such a position. If anyone else had been in this position, our part of the universe would already be optimized, which it arguably doesn’t appear to be. This means that if there is a big (optimizing much of the reachable universe) AI explosion in our future, the entire strength of the Great Filter is in steps before us.

This means a big AI explosion is less likely after considering the strength of the Great Filter, and much less likely if one uses the Self Indication Assumption (SIA).

The large minimum total filter strength contained in the Great Filter is evidence for larger filters in both the past and the future. This is evidence against the big AI explosion scenario, which requires that the future filter be tiny.

SIA implies that we are unlikely to give rise to an intelligence explosion for similar reasons, but probably much more strongly. As I pointed out before, SIA says that future filters are much more likely to be large than small. This is easy to see in the case of AI explosions. Recall that SIA increases the chances of hypotheses where there are more people in our present situation. If we precede an AI explosion, there is only one civilization in our situation, rather than potentially many if we do not. Thus the AI hypothesis is disfavoured (by a factor the size of the extra filter it requires before us).
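
A rough illustration of the size of that disfavouring (the prior and the civilization counts are invented for the example):

```python
# Under SIA, each hypothesis is weighted by how many observers are in our
# situation if it is true. An AI explosion ahead of us would mean the whole
# Great Filter lies behind us, so civilizations at our stage would be rare.

prior = {"AI explosion ahead": 0.5, "large future filter": 0.5}
civilizations_at_our_stage = {"AI explosion ahead": 1, "large future filter": 1000}

unnormalised = {h: prior[h] * civilizations_at_our_stage[h] for h in prior}
z = sum(unnormalised.values())
for hypothesis, weight in unnormalised.items():
    print(f"{hypothesis}: {weight / z:.3f}")
# The AI-explosion hypothesis loses by roughly the factor by which it makes
# observers in our situation rarer (here about 1000x).
```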

What the Self Sampling Assumption (SSA), an alternative principle to SIA, says depends on the reference class. If the reference class includes AIs, then we should strongly not anticipate such an AI explosion. If it does not, then we strongly should. Both results are basically due to the Doomsday Argument.

In summary, if you begin with some uncertainty about whether we precede an AI explosion, then updating on the observed large total filter and accepting SIA should make you much less confident in that outcome. The Great Filter and SIA don’t just mean that we are less likely to peacefully colonize space than we thought, they also mean we are less likely to horribly colonize it, via an unfriendly AI explosion.

Light cone eating AI explosions are not filters

Some existential risks can’t account for any of the Great Filter. Here are two categories of existential risks that are not filters:

Too big: any disaster that would destroy everyone in the observable universe at once, or destroy space itself, is out. If others had been filtered by such a disaster in the past, we wouldn’t be here either. This excludes events such as simulation shutdown and breakdown of a metastable vacuum state we are in.

Not the end: Humans could be destroyed without the causal path to space colonization being destroyed. Also, much of human value could be destroyed without humans being destroyed. For example, super-intelligent AI would presumably be better at colonizing the stars than humans are. The same goes for transcending uploads. Repressive totalitarian states and long-term erosion of value could destroy a lot of human value and still lead to interstellar colonization.

Since these risks are not filters, neither the knowledge that there is a large minimum total filter nor the use of SIA increases their likelihood. SSA still increases their likelihood for the usual Doomsday Argument reasons. I think the rest of the risks listed in Nick Bostrom’s paper can be filters. According to SIA, averting these filter existential risks should be prioritized more highly relative to averting non-filter existential risks such as those in this post. So, for instance, AI is less of a concern relative to other existential risks than otherwise estimated. SSA’s implications are less clear – the destruction of everything in the future is a pretty favourable inclusion in a hypothesis under SSA with a broad reference class, but as always everything depends on the reference class.

Anthropic principles agree on bigger future filters

I finished my honours thesis, so this blog is back on. The thesis is downloadable here.  I’ll blog some other interesting bits soon.

My main point was that two popular anthropic reasoning principles, the Self Indication Assumption (SIA) and the Self Sampling Assumption (SSA), as well as Full Non-indexical Conditioning (FNC), basically agree that future filter steps will be larger than we otherwise think, including the many future filter steps that are existential risks.

Figure 1: SIA likes possible worlds with big populations at our stage, which means small past filters, which means big future filters.

SIA says the probability of being in a possible world is proportional to the number of people it contains who you could be. SSA says it’s proportional to the fraction of people (or some other reference class) it contains who you could be. FNC says the probability of being in a possible world is proportional to the chance of anyone in that world having exactly your experiences. That chance is greater the larger the population of people like you in relevant ways, so FNC generally gets similar answers to SIA. For a lengthier account of all these, see here.

SIA increases expectations of larger future filter steps because it favours smaller past filter steps. Since there is a minimum total filter size, this means it favours big future steps. This I have explained before. See Figure 1. Radford Neal has demonstrated similar results with FNC.
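
A hedged toy version of that calculation (the star count, filter sizes and priors are placeholders I have made up):

```python
# Hold the total filter fixed and split it differently between past and future.
# SIA weighs each split by the expected number of civilizations at our stage,
# which depends only on the part of the filter behind us.

N_STARS = 10**11          # candidate solar systems (illustrative)
TOTAL_FILTER = 1e-10      # chance a solar system ever yields a space colonizer

# Chance a solar system reaches our stage, under three ways of splitting the filter:
past_filter = {"mostly past": 1e-9, "even split": 1e-5, "mostly future": 1e-1}
prior = {split: 1 / 3 for split in past_filter}

weights = {s: prior[s] * N_STARS * past_filter[s] for s in past_filter}  # SIA weighting
z = sum(weights.values())
for split, weight in weights.items():
    future_filter = TOTAL_FILTER / past_filter[split]
    print(f"{split}: posterior {weight / z:.2e}, implied future filter {future_filter:.0e}")
# The favoured splits are those with small past filters, and hence large future filters.
```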

Figure 2: A larger filter between future stages in our reference class makes the population at our own stage a larger proportion of the total population. This increases the probability under SSA.

SSA can give a variety of results according to reference class choice. Generally it directly increases expectations of both larger future filter steps and smaller past filter steps, but only for those steps between stages of development that are at least partially included in the reference class.

For instance, if the reference class includes all human-like things, perhaps it stretches from ourselves to very similar future people who have avoided many existential risks. In this case, SSA increases the chances of large filter steps between these stages, but says little about filter steps before us, or after the future people in our reference class. This is basically the Doomsday Argument – larger filters in our future mean fewer future people relative to us. See Figure 2.

Figure 3: In the world with the larger early filter, the population at many stages including ours is smaller relative to some early stages. This makes the population at our stage a smaller proportion of the whole, which makes that world less likely. (The populations at each stage are a function of the population per relevant solar system as well as the chance of a solar system reaching that stage, which is not illustrated here).

With a reference class that stretches to creatures in filter stages back before us, SSA increases the chances of smaller past filter steps between those stages. This is because larger past filters make observers at almost all stages of development (including ours) less plentiful relative to at least one earlier stage of creatures in our reference class, which makes those at our own stage a smaller proportion of the population of the reference class. See Figure 3.
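
A small numeric sketch of Figure 3’s comparison (the stage populations are invented for illustration):

```python
# With a reference class spanning three stages (early life, our stage, a future
# stage), a larger early filter shrinks every later stage relative to the first,
# so observers at our stage become a smaller share of the reference class.

def our_share(early_survival, late_survival):
    early_life = 1.0                          # population at the earliest stage (baseline)
    our_stage = early_life * early_survival   # survivors of the early filter
    future_stage = our_stage * late_survival  # survivors of the later filter
    return our_stage / (early_life + our_stage + future_stage)

print(our_share(early_survival=0.5, late_survival=0.5))  # smaller early filter: ~0.29
print(our_share(early_survival=0.1, late_survival=0.5))  # larger early filter:  ~0.09
# SSA weighs worlds by this share, so the large-early-filter world ends up less likely.
```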

The predictions of the different principles differ in details such as the extent of the probability shift and the effect of timing. However, it is not necessary to resolve anthropic disagreement to believe we have underestimated the chances of larger filters in our future. As long as we think something like one of the above three principles is likely to be correct, we should update our expectations already.

[Edited 2021 to change download options]