
SIA doomsday: The filter is ahead

The great filter, as described by Robin Hanson:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?

I will argue that we are not far along at all. Even if the steps of the filter we have already passed look about as hard as those ahead of us, most of the filter is probably ahead. Our bright future is an illusion; we await filtering. This is the implication of applying the self indication assumption (SIA) to the great filter scenario, so before I explain the argument, let me briefly explain SIA.

SIA says that if you are wondering which world you are in, rather than just wondering which world exists, you should update on your own existence by weighting possible worlds as more likely the more observers they contain. For instance, suppose you were born of an experiment where the flip of a fair coin determined whether one (tails) or two (heads) people were created, and all you know is this setup and that you exist. SIA says heads is twice as likely as tails. This is contentious; many people think that in such a situation you should consider heads and tails equally likely. A popular result of SIA is that it perfectly protects us from the doomsday argument. So now I’ll show you that we are doomed anyway with SIA.
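The coin-flip example can be sketched as a Bayesian update. This is just a toy illustration of the SIA weighting described above, not something from the original argument:

```python
# A fair coin creates one person (tails) or two (heads). Under SIA,
# weight each possible world by its number of observers, then normalize.
prior = {"heads": 0.5, "tails": 0.5}
observers = {"heads": 2, "tails": 1}

# SIA weighting: prior probability times observer count.
weighted = {w: prior[w] * observers[w] for w in prior}
total = sum(weighted.values())
posterior = {w: weighted[w] / total for w in weighted}

print(posterior)  # heads comes out twice as likely as tails
```

Normalizing gives heads a posterior of 2/3 and tails 1/3, i.e., the two-observer world is exactly twice as likely, as the text states.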

Consider the diagrams below. The first is just an example with one possible world, so you can see clearly what all the boxes mean in the second diagram, which compares worlds. In a possible world there are three planets and three stages of life. Each planet starts at the bottom and moves up, usually until it reaches the filter, where most planets become dead, signified by grey boxes. In the example diagram the filter is after our stage. The small number of planets and stages, and the concentration of the filter into one step, are for simplicity; in reality the filter needn’t be a single unlikely step, and there are many planets and many phases of existence between dead matter and galaxy-colonizing civilization. None of these simplifications matters to the argument.


Diagram key


The second diagram shows three possible worlds, with the filter in a different place in each. In every case one planet reaches the last stage in this model – this signifies a small but nonzero chance of reaching the last step: we don’t see anyone out there, but have no reason to think it impossible. In the diagram, we are in the middle stage – earthbound technological civilization, say. Assume the various places we think the filter could be are equally likely.

SIA doom


This is how to reason about your location using SIA:

  1. The three worlds begin equally likely.
  2. Update on your own existence using SIA by multiplying the likelihood of each world by its population. Now the likelihood ratio of the worlds is 3:5:7.
  3. Update on knowing you are in the middle stage. New likelihood ratio: 1:1:3. Of course if we began with a realistic number of planets in each possible world, the 3 would be huge, and we would be overwhelmingly likely to be in the world where the filter is still ahead.
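The steps above can be sketched numerically. The per-stage observer counts below are my reconstruction from the ratios in the text (3:5:7 totals, with 1, 1 and 3 planets in the middle stage), not a reproduction of the diagrams themselves:

```python
# Three possible worlds, each with three planets and three stages.
# The filter sits before stage 1, 2, or 3 respectively, and one planet
# reaches the last stage in every world. Observer counts per stage:
worlds = {
    "filter before stage 1": [1, 1, 1],  # 3 observers total
    "filter before stage 2": [3, 1, 1],  # 5 observers total
    "filter before stage 3": [3, 3, 1],  # 7 observers total (filter ahead of us)
}

# Step 1: the three worlds begin equally likely.
prior = {w: 1 / 3 for w in worlds}

# Step 2: SIA multiplies each world's likelihood by its population,
# giving the 3:5:7 ratio.
sia = {w: prior[w] * sum(stages) for w, stages in worlds.items()}

# Step 3: condition on being in the middle stage (index 1),
# giving the 1:1:3 ratio.
post = {w: sia[w] * (worlds[w][1] / sum(worlds[w])) for w in worlds}

total = sum(post.values())
for w, p in post.items():
    print(w, round(p / total, 3))
```

With these counts the filter-ahead world ends up three times as likely as either filter-behind world; with realistically many planets per world, its advantage grows with the size of the middle-stage population.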

Therefore we are much more likely to be in worlds where the filter is ahead than behind.

—-

Added: I wrote a thesis on this too.


Addiction

Imagine a strange genie offers you the opportunity to no longer crave food, drink, sleep, warmth or sex. Would you be interested? I wouldn’t be – getting things I need and crave seems to be more fun on the whole than getting things I just casually enjoy. There’s a pretty steep diminishing return to the things I listed though, and like many people I have about as much as I want of most of them. So how to make my life much better?

An obvious strategy is to acquire more such strong desires. This is an unpopular path though. If you want to enjoy life more, it is common to look for new casually pleasurable activities – make new friends, try a new sport, take your partner on an unusual romantic excursion. If you become too attached to an activity, though, you are deemed ‘addicted’, which is a bad thing. There are activities known to make people particularly addicted, and beginning them is seen as a stupid move. But why is addiction so bad?

There is an argument that addictions don’t make you happy – they entrap you in a cycle of endless obsession with no real satisfaction. Remember that enjoyment and desire don’t always coincide. But while that’s surely true of some things people become addicted to, if addiction merely entails strong ongoing craving, why should the objects of such cravings tend to be any less pleasant than those of milder desires?

For these reasons I recently set out to acquire an addiction to coffee, a not particularly dangerous drug. It’s not very strong yet; I do alright without coffee for long periods, but feel relieved and invigorated for having it. So far this seems a great benefit, compared both to casually and unaffectedly drinking coffee and to not drinking coffee at all. However if I say to people that I’m somewhat addicted to coffee they usually express pity. If I say I think it’s a fine arrangement they seem to think I’m silly.

Why is addiction bad, beyond the few specific addictions that aren’t pleasurable or that lead to externalities such as armed thievery?

Trade makes you responsible, but why?

It’s supposedly bad to exploit poor people by paying them as little as you can get away with in trade.

Onlookers who don’t offer the needy anything condemn those who offer some non-altruistic benefit through trade, because it isn’t enough. It’s interesting that we see this as a fault with the person trading, rather than with everyone. Very few people think they themselves are morally obliged to pay the poor more. I’ve discussed before how misguided this is if we care about the wellbeing of the poor person. But why do people feel this way? Here are some reasons I’ve occasionally heard, though I doubt they are all independently responsible for this curiosity:

  1. The issue is domination more than wellbeing. A trader forces a poor person into a low value deal by making an offer they can’t afford to refuse. At least the rest of us respect poor people enough to mind our own business.
  2. It is the role of the trader to trade fairly with the poor people. It is the role of the casual observer to have opinions, not to intervene in traders’ doings.
  3. Benefiting from another’s misfortune is evil, even if it helps them, so interactions with people who need help should be charitable. It’s not great of most people to ignore poor people, but it’s horrific to go out and benefit from their hardships.
  4. Trade is a form of social relation, and people should be nice in their relationships much more than they should be to those they are unconnected to.

Divide individuals for utilitarian libertarianism

or Why I could conceivably support banning smoking part 2 (at the request of Robert Wiblin, who would not support banning smoking)

A strong argument for individuals having complete freedom in decisions affecting nobody else is that each person has much better information about what they want and the details of their situation than anyone else does or could. For example it is often argued that people should choose for themselves how much fat to eat without government intervention, as they have intimate knowledge of how much they like eating fat and how much they dislike being fat, and what degree of mockery their social scene will administer and so forth. Not only that, but they have a much stronger incentive to get the decision right than anyone else.

A counterargument often made here is that people are just so irrational that they don’t know what’s good for them. Sometimes it’s not clear how anyone else would do better, being people themselves – and people in complicated organizations full of other motives, no less. Sometimes it’s not clear whether people are actually that irrational in real life, or whether they manage to compensate.

However, one situation where it seems quite likely that other people would be better informed about your preferences, and about how an outcome will affect you, is when you are making decisions that will affect you far in the future. The average seventy-five year old probably has more in common with the next average seventy-five year old than with their own twenty-five year old self, at least in some relevant respects. Presumably the stranger a person is, the less true this is, but most people are not strange. So for instance a bunch of old people dying of lung cancer have a much better idea of how much you would like lung cancer than you do when, much earlier in life, you are weighing it up in the decision of whether to smoke.

This might not matter if people care a lot about their far future selves, as they can of course seek out people to ask how horrible or great various experiences are. However, even then they are doing no better than anyone else who does that, so there is no argument that they have much more intimate knowledge of their own preferences and situation.

You could still argue that I have much more of an interest than anyone else in my own future, if only a slight one compared to how much my future self cares about herself. But I also have a lot to gain by exploiting her and discounting her feelings, so it’s not clear at all from a utilitarian perspective that I should be free to make decisions that only affect myself, but far into the future.

The simple way to make this argument is to say that the ‘individual’ is temporally too big a unit to be best ruled over by one part in a (temporal) position of power. The relevant properties of the right sized unit, as far as the usual arguments for libertarianism are concerned, are lots of information and shared care, and according to these a far future self is drifting toward being a different person. You shouldn’t be allowed to externalize onto them as much as you like for the same reasons that go for anyone else.

Fictitious sentiments

Music often fills me with the feeling that I care about certain things. Idealistic songs make me feel like I will go out and support some cause or another. Romantic songs make me feel that I would do just about anything for someone. Other songs make me feel passionately motivated to go and do amazing things. Some songs probably even make me feel patriotic, though I only infer that from the completely unfamiliar feeling that sometimes accompanies them.

The plausibility of these feelings is really diminished by the music though. If I really cared so much about some cause or person, I would go and pursue the cause or do whatever the person wanted me to, not lie around on my bed relishing the emotional high of feeling like I wanted to. The same goes for movies. The fact that you are sitting there cheering on the good guy, not out in the world doing something good, shows that you don’t really support his principles. Unless the movie is about some guy who gallantly cheers on worthy characters in movies.