Are ethical asymmetries from property rights?

These are some intuitions people often have:

  • You are not required to save a random person, but you are definitely not allowed to kill one
  • You are not required to create a person, but you are definitely not allowed to kill one
  • You are not required to create a happy person, but you are definitely not allowed to create a miserable one
  • You are not required to help a random person who will be in a dire situation otherwise, but you are definitely not allowed to put someone in a dire situation
  • You are not required to save a person in front of a runaway train, but you are definitely not allowed to push someone in front of a train. By extension, you are not required to save five people in front of a runaway train, and if you would have to push someone in front of the train to do it, then you are not allowed to.

Here are some more:

  • You are not strongly required to give me your bread, but you are not allowed to take mine
  • You are not strongly required to lend me your car, but you are not allowed to unilaterally borrow mine
  • You are not strongly required to send me money, but you are not allowed to take mine

The former are ethical intuitions. The latter are implications of a basic system of property rights. Yet they seem very similar. The ethical intuitions seem to just be property rights as applied to lives and welfare. Your life is your property. I’m not allowed to take it, but I’m not obliged to give it to you if you don’t by default have it. Your welfare is your property. I’m not allowed to lessen what you have, but I don’t have to give you more of it.

[Edited to add: A basic system of property rights means assigning each thing to a person, who is then allowed to decide what happens to that thing. This gives rise to asymmetry because taking another person’s things is not allowed (since they are in charge of them, not you), but giving them more things is neutral (since you are in charge of your things and can do what you like with them).]

My guess is that these ethical asymmetries—which are confusing, because they defy consequentialism—are part of the mental equipment we have for upholding property rights.

In particular, these well-known asymmetries seem to be well explained by property rights:

  • The act-omission distinction naturally arises where an act would involve taking someone else’s property (broadly construed—e.g. their life, their welfare), while an omission would merely fail to give them additional property (e.g. life that they are not by default going to have, additional welfare).
  • ‘The asymmetry’ between creating happy and miserable people is because to create a miserable person is to give that person something negative, which is to take away what they have, while creating a happy person is giving that person something extra.
  • Person-affecting views arise because birth gives someone a thing they don’t have, whereas death takes a thing from them.

Further evidence that these intuitive asymmetries are based on upholding property rights: we also have moral-feeling intuitions about more straightforward property rights. Stealing is wrong.

If I am right that we have these asymmetrical ethical intuitions as part of a scheme to uphold property rights, what would that imply?

It might imply something about when we want to uphold them, or consider them part of ethics, beyond their instrumental value. Property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources—which is to say, to somehow use the resources with low levels of conflict and destruction. They do not appear to be a system for people to achieve specific goals, e.g. whatever is actually good. Unless what is good is exactly the smooth sharing of resources.

I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons and just reason about the consequentialist value of upholding property rights? If we have the moral intuition, does that make the thing of moral value, regardless of its origins? Are pragmatic rules for social cohesion all that ethics is anyway? Questions for another time perhaps (when we are sorting out meta-ethics anyway).

A more straightforward implication is for how we try to explain these ethical asymmetries. If we have an intuition about an asymmetry which stems from upholding property rights, it would seem to be a mistake to treat it as evidence about an asymmetry in consequences, e.g. in value accruing to a person. For instance, perhaps I feel that I am not obliged to create a life, by having a child. Then—if I suppose that my intuitions are about producing goodness—I might think that creating a life is of neutral value, or is of no value to the created child. When in fact the intuition exists because allocating things to owners is a useful way to avoid social conflict. That intuition is part of a structure that is known to be agnostic about benefits to people from me giving them my stuff. If I’m right that these intuitions come from upholding property rights, this seems like an error that is actually happening.

Personal relationships with goodness

Many people seem to find themselves in a situation something like this:

  1. Good actions seem better than bad actions. Better actions seem better than worse actions.
  2. There seem to be many very good things to do—for instance, reducing global catastrophic risks, or saving children from malaria.
  3. Nonetheless, at least some of the time, they do things that seem vastly less good. For instance, just now I went and listened to a choir singing. You might also admire kittens, or play video games, or curl up in a ball, or watch a movie, or try to figure out whether the actress in the movie was the same one that you saw in a different movie. I’ll call this ‘indulgence’, though it is not quite the right category.

On the face of it, this is worrying. Why do you do the less good things? Is it because you prefer badness to goodness? Are you evil?

It would be nice to have some kind of a story about this. Especially if you are just going to keep on occasionally admiring kittens or whatever for years on end. I think people settle on different stories. These don’t have obviously different consequences, but I think they do have subtly different ones. Here are some stories I’m familiar with:

I’m not good: “My behavior is not directly related to goodness, and nor should it be”, “It would be good to do X, but I am not that good”, “Doing good things rather than bad things is generally supererogatory”

I think this one is popular. I find it hard to stomach, because if I am not good, that seems like a serious problem. Plus, if goodness isn’t the guide to my actions, it seems like I’m going to need some sort of concept like schmoodness to determine which things I should do. Plus, I just care about being good for some idiosyncratic reason. But this story seems actually dangerous, because not treating goodness as a guide to one’s actions seems like it might affect one’s actions pretty negatively, beyond excusing a bit of kitten admiring or choir attendance.

In its favor, this story can help with ‘leaving a line of retreat’: maybe you can think more honestly about what is good if you aren’t going to be immediately compelled to do it. It also has the appealing benefit of not looking dishonest, hypocritical, or self-aggrandizing.

Goodness is hard: “I want to be good, but I fail due to weakness of will or some other mysterious force”

This one probably only matches one’s experience while actively trying never to indulge in anything, which seems rare as a long-term strategy.

Indulgence is good: “I am good, but it is not psychologically sustainable to exist without admiring kittens. It really helps with productivity.” “I am good, and it is somehow important for me to admire kittens. I don’t know why, and it doesn’t sound that plausible, but I don’t expect anything good to happen if I investigate or challenge it”

This is nice, because you get to be good, and continue to pursue good things, and not feel endlessly bad about the indulgence.

It has the downside that it sounds a bit like an absurd rationalization—‘of course I care about solving the most important problems, for instance, figuring out where the cutest kittens are on the internet’. Also, supposing that fruitless entertainments are indeed good, they are presumably only good in moderation, and it is hard for observers to tell whether you are doing too much, which will lead them to suspect that you are. Also, you probably can’t tell yourself whether you are doing too much, and supposing there is any kind of pressure to observe more kittens under the banner of ‘the best thing a person can do’, you risk exactly that.

I’m partly good; indulgence is part of compromise: “I am good, but I am a small part of my brain, and there are all these other pesky parts that are bad, and I’m reasonably compromising with them”, “I have many parts, and at least one of them is good, and at least one of them wants to admire kittens.”

This has the upside of being arguably relatively accurate, and it has many of the downsides of the first story, though to a lesser degree.

Among these, there seems to be a basic conflict between being able to feel virtuous, and being able to feel honest and straightforward. Which I guess is what you get if you keep on doing apparently non-virtuous things. But given that stopping doing those things doesn’t seem to be a real option, I feel like it should be possible to have something close to both.

I am interested to hear about any other such accounts people might have heard of.


Realistic thought experiments

What if…

…after you died, you would be transported back and forth in time and get to be each of the other people who ever lived, one at a time, but with no recollection of your other lives?

…you had lived your entire life once already, and got to the end and achieved disappointingly few of your goals, and had now been given the chance to go back and try one more time?

…you were invisible and nobody would ever notice you? What if you were invisible and couldn’t even affect the world, except that you had complete control over a single human?

…you were the only person in the world, and you were responsible for the whole future, but luckily you had found a whole lot of useful robots which could expand your power, via for instance independently founding and running organizations for years without your supervision?

…you would only live for a second, before having your body taken over by someone else?

…there was a perfectly reasonable and good hypothetical being who knew about and judged all of your actions, hypothetically?

…everyone around you was naked under their clothes?

…in the future, many things that people around you asserted confidently would turn out to be false?

…the next year would automatically be composed of approximate copies of today?

…eternity would be composed of infinitely many exact copies of your life?

Added later:

…you just came into existence and got put into your present body—conveniently, with all the memories and skills of the body’s previous owner?

***

(Sometimes I or other people reframe the world for some philosophical or psychological purpose. These are the ones I can currently remember off the top of my head. Several are not original to me*. I’m curious to hear others.)

*Credits: the two variants of #3 are from Plato and Joseph Carlsmith respectively. #5 is surely not original, but I can’t find its source easily. #7 is some kind of standard anti-social-anxiety advice. #9 is from David Wong’s Cracked post on 5 ways you are sabotaging your own life (without even knowing it). #10 is old. #11 is from commenter Doug S, and elsewhere Nate Soares, and according to him it is common advice on avoiding the Sunk Cost Fallacy.

The fundamental complementarity of consciousness and work

Matter can experience things. For instance, when it is a person. Matter can also do work, and thereby provide value to the matter that can experience things. For instance, when it is a machine. Or also, when it is a person.

An important question for what the future looks like is whether it is more efficient to carry out these functions separately or together.

If separately, then perhaps it is best that we end up with a huge pile of unconscious machinery, doing all the work to support and please a separate collection of matter specializing in being pleased.

If together, then we probably end up with the value being had by the entities doing the work.

I think we see people assuming that it is more efficient to separate the activities of producing and consuming value. For instance, that the entities whose experiences matter in the future will ideally live a life of leisure. And that lab-grown meat is a better goal than humane farming.

Which seems plausible. It is at least in line with the general observation that more efficient systems seem to be specialized.

However I think this isn’t obvious. Some reasons we might expect working and benefiting from work to be done by overlapping systems:

  • We don’t know which systems are conscious. It might be that highly efficient work systems tend to be unavoidably conscious. In which case, making their experience good rather than bad could be a relatively cheap way to improve the overall value of the world.
  • For humans, doing purposeful activities is satisfying, so much so that there are concerns about how humans will cope when they are replaced by machines. It might be hard for humans to avoid being replaced, since they are probably much less efficient than other possible machines. But if doing useful things tends to be gratifying for creatures—or for the kinds of creatures we decide are good to have—then it is less obvious that highly efficient creatures won’t be better off doing work themselves, rather than being separate from it.
  • Consciousness is presumably cheap and useful for getting something done, since we evolved to have it.
  • Efficient production doesn’t seem to evolve to be entirely specialized, especially if we take an abstract view of ‘production’. For instance, it is helpful to produce the experience of being a sports star alongside the joy of going to sports games.
  • Specialization seems especially helpful if keeping track of things is expensive. However, technology will make that cheaper, so perhaps the world will tend less toward specialization than it currently seems set to. For instance, you would prefer to plant an entire field of one vegetable rather than a mixture, because then when you harvest them, you can do it quickly without sorting them. But if sorting them is basically immediate and free, you might prefer to plant the mixture. For instance, if they take different nutrients from the soil, or if one wards off insects that would eat the other.

Strengthening the foundations under the Overton Window without moving it

As I understand them, the social rules for interacting with people you disagree with are like this:

  • You should argue with people who are a bit wrong
  • You should refuse to argue with people who are very wrong, because it makes them seem more plausibly right to onlookers

I think this has some downsides.

Suppose there is some incredibly terrible view, V. It is not an obscure view: suppose it is one of those things that most people believed two hundred years ago, but that is now considered completely unacceptable.

New humans are born and grow up. They are never acquainted with any good arguments for rejecting V, because nobody ever explains in public why it is wrong. People just say that it is unacceptable, and that you would have to be a complete loser who is also the Devil not to see that.

Since it took the whole of humanity thousands of years to reject V, even if these new humans are especially smart and moral, they probably do not each have the resources to personally out-reason the whole of civilization for thousands of years. So some of them reject V anyway, because they do whatever society around them says is good-person behavior. But some of the ones who rely more on their own assessment of arguments do not.

This is bad, not just because it leads to an unnecessarily high rate of people believing V, but because the very people who usually help get us out of believing stupid things – the ones who think about issues, and interrogate the arguments, instead of adopting whatever views they are handed – are being deprived of the evidence that would let them believe even the good things we already know.

In short: we don’t want to give the new generation the best sincere arguments against V, because that would be admitting that a reasonable person might believe V. Which seems to get in the way of the claim that V is very, very bad. Which is not only a true claim, but an important thing to claim, because it discourages people from believing V.

But we actually know that a reasonable person might believe V, if they don’t have access to society’s best collective thoughts on it. Because we have a whole history of this happening almost all of the time. On the upside, this does not actually mean that V isn’t very, very bad. Just that your standard non-terrible humans can believe very, very bad things sometimes, as we have seen.

So this all sounds kind of like the error where you refuse to go to the gym because it would mean admitting that you are not already incredibly ripped.

But what is the alternative? Even if losing popular understanding of the reasons for rejecting V is a downside, doesn’t it avoid the worse fate of making V acceptable by engaging people who believe it?

Well, note that the social rules were kind of self-fulfilling. If the norm is that you only argue with people who are a bit wrong, then indeed if you argue with a very wrong person, people will infer that they are only a bit wrong. But if instead we had norms that said you should argue with people who are very wrong, then arguing with someone who was very wrong would not make them look only a bit wrong.

I do think the second norm wouldn’t be that stable. Even if we started out like that, we would probably get pushed to the equilibrium we are in, because for various reasons people are somewhat more likely to argue with people who are only a bit wrong, even before any signaling considerations come into play. Which makes arguing some evidence that you don’t think the person is too wrong. And once it is some evidence, then arguing makes it look a bit more like you think a person might be right. And then the people who are loath to look that way drop out of the debate, and so arguing becomes stronger evidence. And so on.

Which is to say, engaging V-believers does not intrinsically make V more acceptable. But society currently interprets it as a message of support for V. There are some weak intrinsic reasons to take this as a signal of support, which get magnified into it being a strong signal.

My weak guess is that this signal could still be overwhelmed by e.g. constructing some stronger reason to doubt that the message is one of support.

For instance, if many people agreed that there were problems with avoiding all serious debate around V, and accepted that it was socially valuable to sometimes make genuine arguments against views that are terrible, then prefacing your engagement with a reference to this motive might go a long way. Because nobody who actually found V plausible would start with ‘Lovely to be here tonight. Please don’t take my engagement as a sign of support or validation—I am actually here because I think Bob’s ideas are some of the least worthy of support and validation in the world, and I try to do the occasional prophylactic ludicrous debate duty. How are we all this evening?’