Might law save us from uncaring AI?

Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.

When is it efficient to kill humans?

At first glance, it looks like creatures with the power to take humans’ property would do so if the value of the property minus the cost of stealing it was greater than the value of anything the human might produce with it. Once AI is so cheap and efficient that the human would be replaced immediately, and the replacement would use the resources enough better to make up for the costs of stealing and replacement, the human is worth more dead. This might be soon after humans are overtaken. However, such reasoning really imagines one powerful AI’s dealings with one person, then assumes that this generalizes to many of each. Does it?

What does law do?

In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what it wants, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if it could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape this dynamic of groups dominating one another (or some of it) by everyone in a very large group pre-committing to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong stand ready to crush the strong.

When is forced organ selfishness good for you?

Simon Rippon claims having a market for organs might harm society:

It might first be thought that it can never be a good thing for you to have fewer rather than more options. But I believe that this attitude is mistaken on a number of grounds. For one, consider that others hold you accountable for not making the choices that are necessary in order to fulfil your obligations. As things stand, even if you had no possessions to sell and could not find a job, nobody could criticize you for failing to sell an organ to meet your rent. If a free market in body parts were permitted and became widespread, they would become economic resources like any other, in the context of the market. Selling your organs would become something that is simply expected of you when the financial need arises. A new “option” can thus easily be transformed into an obligation, and it can drastically change the attitudes that it is appropriate for others to adopt towards you in a particular context.

He’s right that at the moment when you would normally throw your hands in the air and move on, you are worse off if an organ market gives you the option of paying more debts before declaring your bankruptcy. But this is true of anything you can sell. Do we happen to have just the right number of salable possessions? By Simon’s argument people should benefit from bans on selling all sorts of things – labor, for instance. People (the poor especially) are constantly forced to sell their time – such an integral part of their selves – to pay rent and other debts they had no choice but to incur. If only they were protected from this huge obligation that we laughably call an ‘option’. Such a ban might be costly for the landlord, but it would be good for the poor, right? No! Landlords would respond by not renting to them.

So why shouldn’t we expect the opposite effect if people are allowed to sell more of their possessions? People who currently don’t have the assets or secure income to be trusted with loans or ongoing rental payments might be legitimately offered such things if they had another asset to sell. Think of all the people who would benefit from being able to mortgage their kidney to buy a car instead of riding to some closer job while they gradually save up.

In general when negotiating, having options is good for you, even options it would be bad to have to exercise. When the time comes to carry out your side of a deal, it’s true that lacking the option would let you renege. But when making the deal beforehand, you do better to have the option of carrying out your part later, so that the other person agrees to do theirs. And in a many-shot game, you do best to be able to do your part every time, so the trading (which is better than not trading) continues.

How does information affect hookups?

With social networking sites enabling the romantically inclined to find out more about a potential lover before the first superficial chat than they previously would have in the first month of dating, this is an important question for the future of romance.

Let’s assume that in looking for partners, people care somewhat about rank and somewhat about match. That is, they want someone ‘good enough’ for them who also has interests and personality that they like.

First look at the rank component alone. Assume for a moment that people are happy to date anyone they believe is equal to or better than them in desirability. Then if everyone has a unique rank and perfect information, there will never be any dating at all. The less information people have, the more errors they make in comparing, so the greater the chance that A will think B is above her while B thinks A is above him. Even if people are willing to date people somewhat less desirable than themselves, the same holds – by making more errors you trade wanting more desirable people for wanting less desirable people, who are more likely to want you back, even if they are making their own errors. So to the extent that people care about rank, more information means fewer hookups.
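As a rough illustration of the rank effect, here is a small Monte Carlo sketch. The uniform ranks, Gaussian estimation errors, and the `mutual_interest_rate` helper are all my assumptions for illustration, not anything from the post: each agent dates anyone who seems at least as desirable as themselves, and we count how often a random pair is mutually interested.

```python
import random

def mutual_interest_rate(noise, n_pairs=100_000, seed=0):
    """Estimate how often two agents each believe the other ranks
    at or above themselves, given noisy estimates of desirability."""
    rng = random.Random(seed)
    mutual = 0
    for _ in range(n_pairs):
        a, b = rng.random(), rng.random()   # true desirability ranks
        # each agent's noisy estimate of the other's rank
        a_est_of_b = b + rng.gauss(0, noise)
        b_est_of_a = a + rng.gauss(0, noise)
        # A wants B if B seems at least as desirable as A, and vice versa
        if a_est_of_b >= a and b_est_of_a >= b:
            mutual += 1
    return mutual / n_pairs
```

With zero noise (perfect information) mutual interest between two distinctly ranked people never arises, and raising the noise raises the rate – matching the claim that, on the rank component, more information means fewer hookups.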

How about match then? Here it matters exactly what people want in a match. If they mostly care about their beloved having certain characteristics, more information will let everyone hear about more people who meet their requirements. On the other hand, if we mainly want to avoid people with certain characteristics, more information will strike more people off the list. We might also care about an overall average desirability of characteristics – then more information is as likely to help as to harm, assuming the average person is averagely desirable. Or perhaps we want some minimal level of commonality, in which case more information is always a good thing – it wouldn’t matter if you find out she is a cannibalistic alcoholic prostitute, as long as eventually you discover those board games you both like. There are more possibilities.

You may argue that you will get all the information you want in the end, and the question is only one of speed – the hookups prevented by everyone knowing more initially are just those that would have failed later anyway. However, flaws that would stop you approaching a person with a barge pole beforehand are often ‘endearing’ when you discover them too late, and once loving delusions are in place they can hide further flaws or draw attention away from them, so the rate of information discovery matters. To the extent we care about rank then, more information should mean fewer relationships. To the extent we care about match, it’s unclear without knowing more about what we want.

SIA on other minds

Another interesting implication, if the self indication assumption (SIA) is right, is that solipsism is much less likely correct than you previously thought, and relatedly the problem of other minds is less problematic.

Solipsists think they are unjustified in believing in a world external to their minds, as one only ever knows one’s own mind and there is no obvious reason the patterns in it should be driven by something else (curiously, holding such a position does not entirely dissuade people from trying to convince others of it). This can then be debated on grounds of whether a single mind imagining the world is more or less complex than a world causing such a mind to imagine a world.

The problem of other minds is that even if you believe in the outside world that you can see, you can’t see other minds. Most of the evidence for them is by analogy to yourself, which is only one ambiguous data point (should I infer that all humans are probably conscious? All things? All girls? All rooms at night time?).

SIA says many minds are more likely than one, given that you exist. Imagine you are wondering whether this is World 1, with a single mind among billions of zombies, or World 2, with billions of conscious minds. If you start off roughly uncertain, updating on your own conscious existence with SIA shifts the probability of world 2 to billions of times the probability of world 1.
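The update in that paragraph is just Bayes with SIA’s observer-weighting. A minimal sketch, where the function name, the equal priors, and the billion-observer count are illustrative assumptions:

```python
def sia_posterior(observers_per_world, priors):
    """SIA update: weight each world's prior probability by the number
    of observers it contains, then renormalize."""
    weights = [p * n for p, n in zip(priors, observers_per_world)]
    total = sum(weights)
    return [w / total for w in weights]

# World 1: one conscious mind among zombies; World 2: a billion minds.
posterior = sia_posterior([1, 10**9], [0.5, 0.5])
# World 2 ends up about a billion times as likely as World 1.
```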

Similarly for solipsism. Other minds probably exist. From this you may conclude the world around them does too, or just that your vat isn’t the only one.

SIA doomsday: The filter is ahead

The great filter, as described by Robin Hanson:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?

I will argue that we are not far along at all. Even if the steps of the filter we have already passed look about as hard as those ahead of us, most of the filter is probably ahead. Our bright future is an illusion; we await filtering. This is the implication of applying the self indication assumption (SIA) to the great filter scenario, so before I explain the argument, let me briefly explain SIA.

SIA says that if you are wondering which world you are in, rather than just wondering which world exists, you should update on your own existence by weighting possible worlds as more likely the more observers they contain. For instance, if you were born of an experiment where the flip of a fair coin determined whether one (tails) or two (heads) people were created, and all you know is this setup and that you exist, SIA says heads was twice as likely as tails. This is contentious; many people think that in such a situation you should consider heads and tails equally likely. A popular result of SIA is that it perfectly protects us from the doomsday argument. So now I’ll show you that we are doomed anyway with SIA.
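The coin case can also be checked by brute force: run the experiment many times, pool every observer ever created, and ask what fraction of the pool lives in heads-worlds; SIA reasons as if you were drawn at random from that pool. The simulation below, including its parameter choices, is just an illustrative sketch.

```python
import random

def sampled_observer_heads_fraction(trials=100_000, seed=1):
    """Run the coin experiment many times, pool every observer created,
    and return the fraction of the pool living in heads-worlds."""
    rng = random.Random(seed)
    observers_in_heads = 0
    total_observers = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        created = 2 if heads else 1  # heads creates two people, tails one
        total_observers += created
        if heads:
            observers_in_heads += created
    return observers_in_heads / total_observers

# The fraction comes out near 2/3: heads is twice as likely as tails
# for a randomly sampled observer, as SIA claims.
```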

Consider the diagrams below. The first is just an example with one possible world, so you can see clearly what all the boxes mean in the second diagram, which compares worlds. In a possible world there are three planets and three stages of life. Each planet starts at the bottom and moves up, usually until it reaches the filter. This is where most of the planets become dead, signified by grey boxes. In the example diagram the filter is after our stage. The small number of planets and stages and the concentration of the filter into one step are for simplicity; in reality the filter needn’t be only one unlikely step, and there are many planets and many phases of existence between dead matter and galaxy-colonizing civilization. None of these things matter to the argument.


Diagram key


The second diagram shows three possible worlds where the filter is in different places. In every case one planet reaches the last stage in this model – this is to signify a small chance of reaching the last step, because we don’t see anyone out there, but have no reason to think it impossible. In the diagram, we are in the middle stage – earthbound technological civilization, say. Assume the various places we think the filter could be are equally likely.

SIA doom


This is how to reason about your location using SIA:

  1. The three worlds begin equally likely.
  2. Update on your own existence using SIA by multiplying the likelihood of each world by its population. Now the likelihood ratio of the worlds is 3:5:7.
  3. Update on knowing you are in the middle stage. New likelihood ratio: 1:1:3. Of course if we began with an accurate number of planets in each possible world, the 3 would be humongous, and we would be overwhelmingly likely to be in a world whose filter is still ahead.

Therefore we are much more likely to be in worlds where the filter is ahead than behind.
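The three steps can be checked numerically. In this sketch the per-stage observer counts are read off the toy diagram (three planets, three stages, one survivor in each world); the world labels are mine, not the post’s:

```python
# Living-planet counts per stage (dead matter, our stage, colonized)
# for three toy worlds: filter before stage 1, between stages 1 and 2,
# and between stages 2 and 3. One planet survives in each world.
worlds = {
    "early filter":  [1, 1, 1],  # 3 observers in total
    "middle filter": [3, 1, 1],  # 5 observers in total
    "late filter":   [3, 3, 1],  # 7 observers in total
}

# Step 1: the three worlds begin equally likely.
priors = {w: 1 / 3 for w in worlds}

# Step 2: SIA update -- multiply by total population (ratio 3:5:7).
sia = {w: priors[w] * sum(s) for w, s in worlds.items()}

# Step 3: condition on being in the middle stage. The population factor
# cancels, leaving weights proportional to middle-stage counts (1:1:3).
post = {w: sia[w] * worlds[w][1] / sum(worlds[w]) for w in worlds}
total = sum(post.values())
post = {w: v / total for w, v in post.items()}
```

The late-filter world comes out three times as likely as either alternative, and with realistic planet counts its middle-stage population, and hence its posterior weight, would dwarf the others’.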

—-

Added: I wrote a thesis on this too.