Your existence is informative

Cross posted from Overcoming Bias. Comments there.

***

Warning: this post is technical.

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on a given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q. So far you have not taken into account your observation that the planet you are on has life. How do you update on this evidence, to get a posterior probability for Q? Since you don’t know which is ‘this’ planet, with respect to the model, you can’t update directly on ‘there is life on this planet’, by excluding worlds where this planet doesn’t have life. And you can’t necessarily treat ‘this’ as an arbitrary planet, since you wouldn’t have seen it if it didn’t have life.

I have an ongoing disagreement with an associate who suggests that you should take ‘this planet has life’ into account by conditioning on ‘there exists a planet with life’. That is,

P(Q|there is life on this planet) = P(Q|there exists a planet with life).

Here I shall explain my disagreement.

Nick Bostrom argues persuasively that much science would be impossible if we treated ‘I observe X’ as ‘someone observes X’. This is basically because in a big world of scientists making measurements, somebody will eventually make almost any mistaken measurement. So if all you know, when you measure the temperature of a solution to be 15 degrees, is that you are not in a world where nobody ever measures its temperature to be 15 degrees, this doesn’t tell you much about the temperature.

You can add other apparently irrelevant observations you make at the same time – e.g. that the table is blue chipboard – in order to make your total observations less likely to arise in a given world (at its limit, this is the suggestion of FNC). However it seems implausible that you should make different inferences from a measurement taken while you can also see a detailed but irrelevant picture than from one taken with limited sensory input. And the same problem re-emerges if the universe is large enough; given that the universe is thought to be very, very large, this is a problem. Not to mention, it seems implausible that the size of the universe should greatly affect probabilistic judgements made about entities which are close to independent of most of the universe.

So I think Bostrom’s case is good. However I’m not completely comfortable arguing from the acceptability of something that we do (science) back to the truth of the principles that justify it. So I’d like to make another case against taking ‘this planet has life’ as equivalent evidence to ‘there exists a planet with life’.

Evidence is what excludes possibilities. Seeing the sun shining is evidence against rain, because it excludes the possible worlds where the sky is grey, which include most of those where it is raining. Seeing a picture of the sun shining is not much evidence against rain, because it excludes only worlds where you don’t see such a picture, and those worlds are about as likely to be rainy as the ones that remain.

Receiving the evidence ‘there exists a planet with life’ means excluding all worlds where all planets are lifeless, and not excluding any other worlds. At first glance, this must be different from ‘this planet has life’. Take any possible world where some other planet has life, and this planet has no life. ‘There exists a planet with life’ doesn’t exclude that world, while ‘this planet has life’ does. Therefore they are different evidence.
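To make the difference concrete, here is a minimal sketch with made-up numbers (a prior of 0.5 on Q, a per-planet life chance of 0.9 if Q is true and 0.1 if it is false, and two planets – none of these values come from the post). Treating ‘this planet’ as a fixed, pre-identified planet, the two pieces of evidence support Q to different degrees:

```python
# Illustrative sketch only: the prior, per-planet life chances, and
# planet count below are made up; the post does not fix specific values.

def posterior_exists(prior_q, p_hi, p_lo, n):
    """P(Q | at least one of n planets has life), by Bayes' rule."""
    like_q = 1 - (1 - p_hi) ** n      # P(some planet has life | Q)
    like_not = 1 - (1 - p_lo) ** n    # P(some planet has life | not-Q)
    num = prior_q * like_q
    return num / (num + (1 - prior_q) * like_not)

def posterior_this(prior_q, p_hi, p_lo):
    """P(Q | a particular, pre-identified planet has life)."""
    num = prior_q * p_hi
    return num / (num + (1 - prior_q) * p_lo)

pe = posterior_exists(0.5, 0.9, 0.1, 2)  # 'there exists a planet with life'
pt = posterior_this(0.5, 0.9, 0.1)       # 'this planet has life'
print(round(pe, 3), round(pt, 3))        # 0.839 0.9
```

This sketch sets aside the observer-selection worry raised above (it conditions on a planet picked in advance, not on the planet you find yourself on); it just shows that, as pieces of evidence about the model, the two conditionals are not interchangeable.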

At this point however, note that the planets in the model have no distinguishing characteristics. How do we even decide which planet is ‘this planet’ in another possible world? There needs to be some kind of mapping between planets in each world, saying which planet in world A corresponds to which planet in world B, etc. As far as I can tell, any mapping will do, as long as a given planet in one possible world maps to at most one planet in another possible world. This mapping is basically a definition choice.

So suppose we use a mapping where, in every possible world where at least one planet has life, ‘this planet’ corresponds to one of the planets that has life. See the image below.

Which planet is which?

Squares are possible worlds, each with two planets. Pink planets have life, blue do not. Define ‘this planet’ as the circled one in each case. Learning that there is life on this planet is equal to learning that there is life on some planet.

Now learning that there exists a planet with life is the same as learning that this planet has life. Both exclude the far right-hand possible world, and neither excludes any of the other possible worlds. What’s more, since we can change the probability distribution we end up with just by redefining which planets are ‘the same planet’ across worlds, indexical evidence such as ‘this planet has life’ must be horseshit.

Actually the last paragraph was false. If in every possible world which contains life you pick one of the planets with life to be ‘this planet’, you can no longer know whether you are on ‘this planet’. From your observations alone, you could be on the other planet – the one that is not circled in each of the above worlds – which only has life when both planets do. Whichever planet you are on, you know that there exists a planet with life. But because there’s some probability of your being on the planet which only rarely has life, you have more information than that. Redefining which planet was which didn’t change that.

Perhaps a different definition of ‘this planet’ would get what my associate wants? The problem with the last one was that it no longer necessarily included the planet we are on. So what if we define ‘this planet’ to be the one you are on, plus a life-containing planet in each of the other possible worlds that contain at least one life-containing planet? A strange, half-indexical definition, but why not? One thing remains to be specified – which planet is ‘this’ planet in the worlds where you don’t exist? Let’s say it is chosen randomly.

Now is learning that ‘this planet’ has life any different from learning that some planet has life? Yes. Now again there are cases where some planet has life, but it’s not the one you are on. This is because the definition only picks out planets with life across other possible worlds, not this one. In this one, ‘this planet’ refers to the one you are on. If you don’t exist, this planet may not have life. Even if there are other planets that do. So again, ‘this planet has life’ gives more information than ‘there exists a planet with life’.

You either have to accept that someone else might exist when you do not, or you have to define ‘yourself’ as something that always exists, in which case you no longer know whether you are ‘yourself’. Either way, changing definitions doesn’t change the evidence. Observing that you are alive tells you more than learning that ‘someone is alive’.

Resolving Paradoxes of Intuition

Cross posted from Overcoming Bias. Comments there.

***

Shelly Kagan gave a nice summary of some problems involved in working out whether death is bad for one. I agree with Robin’s response, and have posted before about some of the particular issues. Now I’d like to make a more general observation.

First I’ll summarize Kagan’s story. The problems are something like this. It seems like death is pretty bad. Thought experiments suggest that it is bad for the person who dies, not just their friends, and that it is bad even if it is painless. Yet if a person doesn’t exist, how can things be bad for them? Seemingly because they are missing out on good things, rather than because they are suffering anything. But it is hard to say when they bear the cost of missing out, and it seems like things that happen happen at certain times. Or maybe they don’t. But then we’d have to say all the people who don’t exist are missing out, and that would mean a huge tragedy is happening as long as those people go unconceived. We don’t think a huge tragedy is happening, so let’s say it isn’t. Also we don’t feel too bad about people not being born earlier, the way we do about them dying sooner. How can we distinguish these cases of deprivation due to non-existence from the deprivation that happens after death? Not in any satisfactorily non-arbitrary way. So ‘puzzles still remain’.

This follows a pattern common to other philosophical puzzles. Intuitions say X sometimes, and not X other times. But they also claim that one should not care about any of the distinctions that can reasonably be made between the times when they say X is true and the times when they say X is false.

Intuitions say you should save a child dying in front of you. Intuitions say you aren’t obliged to go out of your way to protect a dying child in Africa. Intuitions also say physical proximity, likelihood of being blamed, etc. shouldn’t be morally relevant.

Intuitions say you are the same person today as tomorrow. Intuitions say you are not the same person as Napoleon. Intuitions also say that whether you are the same person or not shouldn’t depend on any particular bit of wiring in your head, and that changing a bit of wiring doesn’t make you slightly less you.

Of course not everyone shares all of these intuitions (I don’t). But for those who do, there are problems. These problems can be responded to by trying to think of other distinctions between contexts that do seem intuitively legitimate, reframing an unintuitive conclusion to make it intuitive, or just accepting at least one of the unintuitive conclusions.

The first two solutions – finding more appealing distinctions and framings – seem a lot more popular than the third – biting a bullet. Kagan concludes that ‘puzzles remain’, as if this inconsistency is an apparent mathematical conflict that one can fully expect to see through eventually, given the right way of thinking about it. And many other people have been working for a while on finding a way to make these intuitions consistent. Yet why expect to find a resolution?

Why not expect this contradiction to be like the one that arises if you claim that you like apples more than pears and also pears more than apples? There is no nuanced way to resolve the issue, except to give up at least one. You can make up values, but sometimes they are just inconsistent. The same goes for evolved values.

From Kagan’s account of death, it seems likely that our intuitions are just inconsistent. Given natural selection, this is not particularly surprising. It’s no mystery how people could evolve to care about the survival of themselves and their associates, yet not to care about people who don’t exist. Even if people who don’t exist suffer the same costs from not existing. It’s also not surprising that people would come to believe their care for others is largely about the others’ wellbeing, not their own interests, and so believe that if they don’t care about a tragedy, there isn’t one. There might be some other resolution in the death case, but until we see one, it seems odd to expect one. Especially when we have already looked so hard.

Most likely, if you want a consistent position you will have to bite a bullet. If you are interested in reality, biting a bullet here shouldn’t be a last resort after searching every nook and cranny for a consistent and intuitive position. It is much more likely that humans have inconsistent intuitions about the value of life than that we have so far failed to notice some incredibly important and intuitive distinction in circumstances that drives our different intuitions. Why do people continue to search for intuitive resolutions to such problems? It could be that accepting an unintuitive position is easy, unsophisticated, unappealing to funders and friends, and seems like giving up. Is there something else I’m missing?

Moving blogs

Today Overcoming Bias becomes a group blog again, and I become one of the group. Robin will keep blogging, joined by Robert Wiblin and me. The other two are my good friends, and among my most respected intellectual influences, so it should be fun! We also hope that between us we can better produce regular enough output to make it worth your while visiting, without taking too much time away from other projects that we all have.

I might cross post my posts back here, for the sake of completeness. However it probably won’t be timely, and I might turn off the comments to keep the conversation in one place. So update your bookmarks/RSS/etc!

Do strange scenarios help us ask why not?

People are working on making robot cars communicate, with pedestrians for instance.

Notice that the apparent benefit of having cars communicate with pedestrians doesn’t actually have much to do with robots driving the cars. If having cars signal to pedestrians is useful, probably so is having drivers signal to pedestrians. Yet current cars and driving norms hardly provide for this at all. Many a time I have thought about this while trying to cross a road with a car coming toward me that seems to be slowing down, kind of, and whose windscreen I can’t really see through. Is the driver waving to me? Eating a sandwich? Hard to tell, so I won’t take my chances. Ah, now he’s stopped. And he’s annoyed. Or swatting a fly. Does that mean he’s about to go? Hard to tell, maybe I’ll just wait a sec to be sure. Now he’s really annoyed – annoyed enough to give up and drive on?… If only there were some little signal that meant ‘while this signal is on, I see you and am stopping for you’.

This is not my real point, but an example. Thinking about a strange future of robot cars causes us to make predictions and envision potentially valuable additions to it that have little to do with robot cars. Similarly, thinking about future AI development causes people to wonder if sudden leaps in technological capacity could cause a small portion of humanity to get far ahead of the rest, or if human values might be lost in the long run. These issues are not specific to AI. Yet when we look at the world around us we seem less likely to see ways to improve it, or to wonder why no groups of humans do get ahead of the rest technologically, or even notice that technological changes tend to be relatively small, or to ask what is becoming of our values.

In general it seems that thinking about strange scenarios causes people to expect things which have little to do with the scenarios themselves. Since they have little to do with the scenarios, it makes sense to ask why they haven’t already happened, or whether we could already benefit from them.

Some men see things as they are and say, why? I dream of things the way they never were and say, why not?

– Robert F. Kennedy, after George Bernard Shaw

Dreaming of the way things never were seems more impressive, difficult, and useful. Perhaps thinking of strange scenarios is one way to do it more easily.

Podcasts with Robin Hanson 3

Robin Hanson and I just recorded two more podcasts, on:

We’ve recorded four podcasts before, on Signaling, Idealism, School, and Future.