Might law save us from uncaring AI?

Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.

When is it efficient to kill humans?

At first glance, it looks like creatures with the power to take humans’ property would do so whenever the value of the property, minus the cost of stealing it, exceeded the value of anything the human might produce with it. Once AI is so cheap and efficient that the human would be replaced immediately, and the replacement would use the resources enough better to make up for the costs of stealing and replacement, the human is better dead. This might be soon after humans are overtaken. However, such reasoning really imagines one powerful AI’s dealings with one person, then assumes that generalizes to many of each. Does it?
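
To make the cost-benefit condition concrete, here is a minimal sketch in Python. Everything in it – the function name, the quantities, the numbers – is my own invented illustration of the argument, not anything from elsewhere:

    def expropriation_pays(human_output, ai_output, steal_cost, replace_cost):
        # Steal iff redeploying the resources gains more than the one-off
        # costs of theft and replacement (all quantities are illustrative
        # present values, invented for this sketch).
        return ai_output - human_output > steal_cost + replace_cost

    print(expropriation_pays(1.0, 1.2, 0.5, 0.5))    # False: the human is worth keeping
    print(expropriation_pays(1.0, 100.0, 0.5, 0.5))  # True: replacement covers the costs

Once AI output dwarfs human output, even sizeable transition costs stop protecting the human.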

What does law do?

In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what it wants, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if it could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape this dynamic of groups dominating one another (or some of it): everyone in a very large group pre-commits to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong await to crush the strong.

This looks like it should work as long as a majority of the huge group, weighted by power, don’t change their minds about their commitment to the law-enforcing majority and decide, for instance, to crush the puny humans. Roughly the same issue exists now – a majority could decide to dominate a smaller group and take their property – but at the moment this doesn’t happen much. The interesting question is whether the factors that keep this stable at the moment will continue into super-robot times. If these factors are generic to the institution or to agents, we are probably safe. If they are to do with human values, such as empathy or conformism, or other traits super-intelligent AIs won’t necessarily inherit in human quantities, then we aren’t necessarily safe.

For convenience, when I write ‘law’ here I mean law that is aimed at fulfilling the above purpose: stopping people dominating one another. I realize the law only roughly does this, and can be horribly corrupted. If politicians were to change a law to make it permissible to steal from Lebanese people, this would class as ‘breaking the law’ in the current terminology, and also as attempting to defect from the law-enforcing majority with a new powerful group.

How does the law retain control?

Two non-human-specific reasons

A big reason law is stable now is that anyone who wants to renege on their commitment to the enforcing majority can expect punishment, unless they somehow coordinate with a majority to defect to the same new group at the same time and take power. That’s hard to do; you have to spread the intention to enough people to persuade a majority, without evidence reaching anyone who will call upon the rest to punish you for your treachery. Those you seek to persuade risk punishment too, even just for not dobbing you in, so they have good reason to ignore or expose you. And the bigger the group of co-conspirators you assemble, the more likely you are to be noticed, and the more seriously the majority will think you need punishing. So basically it’s hard to coordinate a majority of those who punish defection to defect from that group. This doesn’t seem human-specific.
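
As a rough illustration of how this difficulty compounds, suppose – my assumption, purely for illustration – that each agent you approach independently reports the plot with some small probability:

    def plot_survives(p_report, recruits_needed):
        # Chance of recruiting everyone without a single report, assuming
        # each recruit independently exposes you with probability p_report.
        return (1.0 - p_report) ** recruits_needed

    print(plot_survives(0.05, 10))    # ~0.60: a small cabal can stay secret
    print(plot_survives(0.05, 1000))  # ~5e-23: a majority-sized plot cannot

Even a modest per-recruit risk of exposure makes secretly assembling anything majority-sized essentially hopeless.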

Then there is the issue of whether most agents would benefit from coordinating to dominate a group, supposing for some reason they easily could. At first glance it looks like yes. If they had the no-risk option of keeping the status quo or joining a successful majority bid to steal everything from a much less powerful minority, they would benefit. But then they would belong to a smaller group with a precedent of successfully pillaging minorities. That would make it easier for anyone to coordinate to do something similar in future, as everyone’s expectations of others joining the dominating group would be higher, so people would have more reason to join themselves. After one such successful event, you should expect more of them, all things equal. That means many people who could join an initial majority should expect a significant chance of being in the next targeted minority once this begins, which decreases the benefit of joining. It doesn’t matter if a good proportion of agents could be reasonably confident they would remain in the dominant group – this just makes it harder to find a majority to coordinate defection amongst. While it’s not clear whether humans often think this out in much detail, people generally believe that if law ‘breaks down’ all hell will break loose, which is basically the same idea. This legitimate anticipation of further losses for many upon abandonment of the law also doesn’t seem specific to humans.
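
The expected-value logic here can be put as a toy calculation, with numbers I have made up:

    def ev_of_joining(loot_share, future_wealth, p_targeted_next):
        # One-off gain from the raid, minus the expected loss from the
        # precedent it sets (all quantities invented for illustration).
        return loot_share - p_targeted_next * future_wealth

    print(ev_of_joining(10, 100, 0.05))  # 5.0: joining looks good
    print(ev_of_joining(10, 100, 0.30))  # -20.0: the precedent makes it a bad deal

Once the precedent pushes each member’s chance of being in the next targeted minority high enough, joining the first raid stops paying.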

Two possibly human-specific reasons

Are there further human-specific reasons our system is fairly stable? One potential contributor is that humans are compassionate toward one another. Given the choice between the status quo and safely stealing everything from a powerless minority with the guarantee that everything will go back to normal afterwards, I suspect most people would take the non-evil option. So the fact that society doesn’t conspire against the elderly or left-handed people is poor evidence that it is the law that should be credited – such examples don’t distinguish law from empathy, or even from trying to look good. How can we know how much stability the non-human-dependent factors offer when our only experimental subjects are bound by things like empathy? We could look at people who aren’t empathetic, such as sociopaths, but they presumably won’t currently act as they would in a society of sociopaths, knowing that everyone else is more empathetic. On an individual level they are almost as safe to those around them as other people, though they do tend to cheat more. This may not matter – most of our rules are made for people who are basically nice to each other automatically, so those who aren’t can cheat, but it’s not worth putting up more guards because such people are rare. Where there is more chance of many people cheating, there’s nothing to stop more stringent rules and monitoring. The main issue is whether less empathetic creatures would find it much easier to overcome the first issue listed above, and organize a majority to ignore the law for a certain minority. I expect it would be somewhat easier – moral repulsion is part of what puts people off joining such campaigns – but I can’t see how it would account for much of the dynamic.

A better case study for how much empathy matters is the treatment of those whom most people wouldn’t mind stealing from, were it feasible. These include criminals, certain foreigners at certain times, hated ethnic and religious minorities, and dead people. Here it is less clear – in some cases they are certainly treated terribly relative to others, and their property is taken either by force or more subtly. But are they treated as badly as they would be without law? I think they are usually treated much better than this, but I haven’t looked into it extensively. It’s not hard to subtly steal from minorities under the guise of justice, so there are sure to be losses, but giving up that guise altogether is harder.

Perhaps people don’t sign up to loot the powerless just because they are conformist? Conformity presumably helps maintain any status quo, but it may also mean that once the loyalties of enough people have shifted, the rest follow faster than they otherwise would. On balance this would probably hinder groups trying to take power. It’s not obvious whether advanced robots would be more or less conformist though. As the judgements of other agents improve in quality, there is probably more reason to copy them, all things equal.

This is probably a non-exhaustive list of the factors that make us amenable to a peaceful, law-governed existence – perhaps you can think of more.

Really powerful robots

If you are more powerful than the rest of the world combined, you need not worry about the law. If you want to seize power, you already have your majority. There is presumably some continuum between this situation and one of agents with perfectly equal power. For instance, if three agents together have as much power as everyone else combined, they will have less trouble organizing a takeover than millions of people will. So it might look like very powerful AI will move us toward a less stable situation, because of the increased power differential between them and us. But that’s the wrong generalization. If the group of very powerful creatures becomes bigger, the dynamic shouldn’t be much different from when there were many agents of similar power. By the earlier reasoning, it shouldn’t matter that many people are powerless if there are enough powerful agents upholding the law that it’s hard to organize a movement among them to undermine it. The main worry in such a situation might be the creation of an obvious dividing line in society, as I will explain next.

Schelling points in social dividedness

Anything that weakens the common assumption that most of the power will join the law’s side is dangerous. A drive by someone in the majority to defeat the minority can expect more sympathy from his own side in a society that is cleanly divided than in one with messier or vaguer divisions. People will be less hesitant to join in conquering the others if they expect everyone else to do the same, because the potential for punishment is lessened or reversed. Basically, if you can coordinate by shared expectations you needn’t coordinate in a more punishable fashion.
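
One crude way to picture coordination by shared expectations is as a threshold model – the threshold and the expectation levels below are invented for illustration:

    def attack_coordinates(shared_expectation, join_threshold):
        # Each agent joins iff they expect enough others to join; a salient
        # dividing line is what lifts the shared expectation over the threshold.
        return shared_expectation >= join_threshold

    print(attack_coordinates(0.9, 0.6))  # clean us-vs-them line: True, the attack coordinates
    print(attack_coordinates(0.2, 0.6))  # messy, blurred divisions: False, it fizzles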

Even the second non-human-specific factor above would be lessened: if the minority being looted is divided from the rest by an obvious enough boundary, and there are no other such clear divisions, similar events are less likely to follow – it may look like a special case, a good stopping point on a slippery slope. Nobody need assume that a next time would come as easily.

For both these reasons, then, it’s important to avoid any Schelling point for where the law should cease to apply. Humans might therefore be better off if there is a vast range of different robots and robot-human amalgams – it obscures the line. Small-scale beneficial relationships such as trade or friendship should similarly make it less obvious who would side with whom. On the other hand, it would be terrible to design a clear division into the system from the start, such as having different laws apply to the different groups, or having them use different systems altogether, like foreign nations. Making sure these things are in our favor from the start is feasible, and should protect us as well as we know how.

7 responses to “Might law save us from uncaring AI?”

  1. To your ‘two non-human-specific reasons’ I would add that a breakdown of law and order makes an economy much less productive (you allude to that but don’t say it directly). If the mafia took over Australia and just wanted to maximise what they could cream off the top, they would probably want to keep the law functioning well in general rather than just steal indiscriminately, even if they cared about no one else.

    If there are many strong AIs any group of them might decide they can achieve their goals better by going with current law (or improving it through normal channels) than by ruining the system by which we coordinate together. That wouldn’t be true for all AI values of course.

  2. A few points:

    The Latin American experience supports the value of blurred group boundaries in reducing ethnic conflict.

    Liberal democracies don’t have ‘law’ in the sense you’re describing, law which would prohibit attempts to democratically change it (despite a few things on the edges like hate speech laws).

    Most countries have undergone wars, revolutions, constitutional conventions, and other processes that have disrupted existing legal institutions and left coalition politics to determine the new legal order. Disruptions are much worse news if your survival hinges on legacy rights.

    Large differences in intelligence matter, and with them the ability to analyze others’ source code or other signals about motivations without being deceived. If superintelligences can make credible promises to each other (backed by source-code verification or joint construction of surrogate AIs) but not to humans (who are incapable of verifying the motivations of an AI even from code designed to appear transparent), that’s a basis for a ‘one-time expropriation’ (with the new order backed by the AI-only cooperation technologies).

    • In the US, there are a great many government policies which Congress could change, yet it changes them only rarely, exactly because folks fear “opening a can of worms.” Politicians do explicitly fear that by opening up some topic for change with an initial credible proposal, others will jump in and make amendments, and who knows where that all will lead.

  3. michael vassar

    The powerless generally have nothing to loot. When, as in the case of Native Americans historically, they have something valued by others, they are looted.

  4. Even with a strong legal system, it is not too hard today to kill a person. Obviously, with good law enforcement, it would rarely be rational to commit murder. However, because of the sheer number of people in places with good law enforcement, there are bound to be a few irrational people who do kill others.

    So how does this apply to superintelligent AIs? Even if there are many AIs who limit each other’s actions, if killing all humans is as easy for an AI as killing a human is for a normal person, humans might all die simply because one irrational AI decided they wanted that to happen. In this case, having more AIs would actually make human extinction more likely!

    Of course, this isn’t likely to happen – while an individual human can kill the ant colony in their backyard, they can’t cause the entire species of ants to become extinct. Still, even if an individual AI could only destroy a city, many humans could still die from AI attacks.

  5. How about we just don’t give AIs guns? Government laws are backed up by the capacity for violence, and without that, AIs couldn’t make binding laws.

  6. That AI will be dangerously unfriendly is treated as a near certainty on LW. However, the argument for it is highly conjunctive, and there are alternatives at every stage. One of the alternatives, regularly dismissed without really being argued against, is the one outlined above: that a society of rational agents would converge on the need for some kind of law/ethics without having to grok every wrinkle of human values.
