Worth keeping

(Epistemic status: quick speculation which matches my intuitions about how social things go, but which I hadn’t explicitly described before, and haven’t checked.)

If your car gets damaged, should you invest more or less in it going forward? It could go either way. The car needs more investment to be in good condition, so maybe you do that. But the car is worse than you thought, so maybe you start considering a new car, or putting your dollars into Uber instead.

If you are writing an essay and run into difficulty describing something, you can put in additional effort to find the right words, or you can suspect that this is not going to be a great essay, and either give up, or prepare to get it out quickly and imperfectly, worrying less about the other parts that don’t quite work.

When something has a problem, you always choose whether to double down with it or to back away.

(Or in the middle, to do a bit of both: to fix the car this time, but start to look around for other cars.)

I’m interested in this as it pertains to people. When a friend fails, do you move toward them—to hold them, talk to them, pick them up at your own expense—or do you edge away? It probably depends on the friend (and the problem). If someone embarrasses themselves in public, do you sully your own reputation to stand up for their worth? Or do you silently hope not to be associated with them? If they are dying, do you hold their hand, even if it destroys you? Or do you hope that someone else is doing that, and become someone they know less well?

Where a person fits on this line would seem to radically change their incentives around you. Someone firmly in your ‘worth keeping’ zone does better to let you see their problems than to hide them. Because you probably won’t give up on them, and you might help. Since everyone has problems, and they take effort to hide, this person is just a lot freer around you. If instead every problem hastens a person’s replacement, they should probably not only hide their problems, but also many of their other details, which are somehow entwined with problems.

(A related question is when you should let people know where they stand with you. Prima facie, it seems good to make sure people know when they are safe. But that also makes it clearer when a person is not safe, which has downsides.)

If there are better replacements in general, then you will be inclined to replace things more readily. If you can press a button to have a great new car appear, then you won’t have the same car for long.

The social analog is that in a community where friends are more replaceable—for instance, because everyone is extremely well selected to be similar on important axes—it should be harder to be close to anyone, or to feel safe and accepted. Even while everyone is unusually much on the same team, and unusually well suited to one another.

Are ethical asymmetries from property rights?

These are some intuitions people often have:

  • You are not required to save a random person, but you are definitely not allowed to kill one
  • You are not required to create a person, but you are definitely not allowed to kill one
  • You are not required to create a happy person, but you are definitely not allowed to create a miserable one
  • You are not required to help a random person who will be in a dire situation otherwise, but you are definitely not allowed to put someone in a dire situation
  • You are not required to save a person in front of a runaway train, but you are definitely not allowed to push someone in front of a train. By extension, you are not required to save five people in front of a runaway train, and if you would have to push someone in front of the train to do it, then you are not allowed to.

Here are some more:

  • You are not strongly required to give me your bread, but you are not allowed to take mine
  • You are not strongly required to lend me your car, but you are not allowed to unilaterally borrow mine
  • You are not strongly required to send me money, but you are not allowed to take mine

The former are ethical intuitions. The latter are implications of a basic system of property rights. Yet they seem very similar. The ethical intuitions seem to just be property rights as applied to lives and welfare. Your life is your property. I’m not allowed to take it, but I’m not obliged to give it to you if you don’t by default have it. Your welfare is your property. I’m not allowed to lessen what you have, but I don’t have to give you more of it.

[Edited to add: A basic system of property rights means assigning each thing to a person, who is then allowed to decide what happens to that thing. This gives rise to asymmetry because taking another person’s things is not allowed (since they are in charge of them, not you), but giving them more things is neutral (since you are in charge of your things and can do what you like with them).]

My guess is that these ethical asymmetries—which are confusing, because they defy consequentialism—are part of the mental equipment we have for upholding property rights.

In particular these well-known asymmetries seem to be explained well by property rights:

  • The act-omission distinction naturally arises where an act would involve taking someone else’s property (broadly construed—e.g. their life, their welfare), while an omission would merely fail to give them additional property (e.g. life that they are not by default going to have, additional welfare).
  • ‘The asymmetry’ between creating happy and miserable people is because to create a miserable person is to give that person something negative, which is to take away what they have, while creating a happy person is giving that person something extra.
  • Person-affecting views arise because birth gives someone a thing they don’t have, whereas death takes a thing from them.

Further evidence that these intuitive asymmetries are based on upholding property rights: we also have moral-feeling intuitions about more straightforward property rights. Stealing is wrong.

If I am right that we have these asymmetrical ethical intuitions as part of a scheme to uphold property rights, what would that imply?

It might imply something about when we want to uphold them, or consider them part of ethics, beyond their instrumental value. Property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources—which is to say, to somehow use the resources with low levels of conflict and destruction. They do not appear to be a system for people to achieve specific goals, e.g. whatever is actually good. Unless what is good is exactly the smooth sharing of resources.

I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons and just reason about the consequentialist value of upholding property rights? If we have the moral intuition, does that make the thing of moral value, regardless of its origins? Are pragmatic rules for social cohesion all that ethics is anyway? Questions for another time perhaps (when we are sorting out meta-ethics anyway).

A more straightforward implication is for how we try to explain these ethical asymmetries. If we have an intuition about an asymmetry which stems from upholding property rights, it would seem to be a mistake to treat it as evidence about an asymmetry in consequences, e.g. in value accruing to a person. For instance, perhaps I feel that I am not obliged to create a life, by having a child. Then—if I suppose that my intuitions are about producing goodness—I might think that creating a life is of neutral value, or is of no value to the created child. When in fact the intuition exists because allocating things to owners is a useful way to avoid social conflict. That intuition is part of a structure that is known to be agnostic about benefits to people from me giving them my stuff. If I’m right that these intuitions come from upholding property rights, this seems like an error that is actually happening.

Personal relationships with goodness

Many people seem to find themselves in a situation something like this:

  1. Good actions seem better than bad actions. Better actions seem better than worse actions.
  2. There seem to be many very good things to do—for instance, reducing global catastrophic risks, or saving children from malaria.
  3. Nonetheless, they continually do things that seem vastly less good, at least some of the time. For instance, just now I went and listened to a choir singing. You might also admire kittens, or play video games, or curl up in a ball, or watch a movie, or try to figure out whether the actress in the movie was the same one that you saw in a different movie. I’ll call this ‘indulgence’, though it is not quite the right category.

On the face of it, this is worrying. Why do you do the less good things? Is it because you prefer badness to goodness? Are you evil?

It would be nice to have some kind of a story about this. Especially if you are just going to keep on occasionally admiring kittens or whatever for years on end. I think people settle on different stories. These don’t have obviously different consequences, but I think they do have subtly different ones. Here are some stories I’m familiar with:

I’m not good: “My behavior is not directly related to goodness, nor should it be”, “It would be good to do X, but I am not that good”, “Doing good things rather than bad things is generally supererogatory”

I think this one is popular. I find it hard to stomach, because if I am not good that seems like a serious problem. Plus, if goodness isn’t the guide to my actions, it seems like I’m going to need some sort of concept like schmoodness to determine which things I should do. Plus I just care about being good for some idiosyncratic reason. But it seems actually dangerous, because not treating goodness as a guide to one’s actions seems like it might affect one’s actions pretty negatively, beyond excusing a bit of kitten admiring or choir attendance.

In its favor, this story can help with ‘leaving a line of retreat’: maybe you can better think about what is good, honestly, if you aren’t going to be immediately compelled to do it. It also has the appealing benefit of not looking dishonest, hypocritical, or self-aggrandizing.

Goodness is hard: “I want to be good, but I fail due to weakness of will or some other mysterious force”

This one probably only matches one’s experience while actively trying to never indulge in anything, which seems rare as a long term strategy.

Indulgence is good: “I am good, but it is not psychologically sustainable to exist without admiring kittens. It really helps with productivity.” “I am good, and it is somehow important for me to admire kittens. I don’t know why, and it doesn’t sound that plausible, but I don’t expect anything good to happen if I investigate or challenge it”

This is nice, because you get to be good, and continue to pursue good things, and not feel endlessly bad about the indulgence.

It has the downside that it sounds a bit like an absurd rationalization—‘of course I care about solving the most important problems, for instance, figuring out where the cutest kittens are on the internet’. Also, supposing that fruitless entertainments are indeed good, they are presumably only good in moderation, and it is hard for observers to tell whether you are indulging too much, which will lead them to suspect that you are. Also, you probably can’t tell yourself whether you are indulging too much, and if there is any pressure to admire more kittens under the banner of ‘the best thing a person can do’, you risk giving in to it.

I’m partly good; indulgence is part of compromise: “I am good, but I am a small part of my brain, and there are all these other pesky parts that are bad, and I’m reasonably compromising with them” “I have many parts, and at least one of them is good, and at least one of them wants to admire kittens.”

This has the upside of being arguably relatively accurate, and many of the downsides of the first story, but to a lesser degree.

Among these, there seems to be a basic conflict between being able to feel virtuous, and being able to feel honest and straightforward. Which I guess is what you get if you keep on doing apparently non-virtuous things. But given that stopping doing those things doesn’t seem to be a real option, I feel like it should be possible to have something close to both.

I am interested to hear about any other such accounts people might have heard of.


Realistic thought experiments

What if…

…after you died, you would be transported back and forth in time and get to be each of the other people who ever lived, one at a time, but with no recollection of your other lives?

…you had lived your entire life once already, and got to the end and achieved disappointingly few of your goals, and had now been given the chance to go back and try one more time?

…you were invisible and nobody would ever notice you? What if you were invisible and couldn’t even affect the world, except that you had complete control over a single human?

…you were the only person in the world, and you were responsible for the whole future, but luckily you had found a whole lot of useful robots which could expand your power, via, for instance, independently founding and running organizations for years without your supervision?

…you would only live for a second, before having your body taken over by someone else?

…there was a perfectly reasonable and good hypothetical being who knew about and judged all of your actions, hypothetically?

…everyone around you was naked under their clothes?

…in the future, many things that people around you asserted confidently would turn out to be false?

…the next year would automatically be composed of approximate copies of today?

…eternity would be composed of infinitely many exact copies of your life?

Added later:

…you just came into existence and got put into your present body—conveniently, with all the memories and skills of the body’s previous owner?


(Sometimes I or other people reframe the world for some philosophical or psychological purpose. These are the ones I can currently remember off the top of my head. Several are not original to me*. I’m curious to hear others.)

*Credits: #3 is from Plato and Joseph Carlsmith respectively. #5 is surely not original, but I can’t find its source easily. #7 is some kind of standard anti-social anxiety advice. #9 is from David Wong’s Cracked post on 5 ways you are sabotaging your own life (without even knowing it). #10 is old. #11 is from commenter Doug S, and elsewhere Nate Soares, and according to him is common advice on avoiding the Sunk Cost Fallacy.

The fundamental complementarity of consciousness and work

Matter can experience things. For instance, when it is a person. Matter can also do work, and thereby provide value to the matter that can experience things. For instance, when it is a machine. Or also, when it is a person.

An important question for what the future looks like, is whether it is more efficient to carry out these functions separately or together.

If separately, then perhaps it is best that we end up with a huge pile of unconscious machinery, doing all the work to support and please a separate collection of matter specializing in being pleased.

If together, then we probably end up with the value being had by the entities doing the work.

I think we see people assuming that it is more efficient to separate the activities of producing and consuming value. For instance, that the entities whose experiences matter in the future will ideally live a life of leisure. And that lab-grown meat is a better goal than humane farming.

Which seems plausible. It is at least in line with the general observation that more efficient systems seem to be specialized.

However I think this isn’t obvious. Some reasons we might expect working and benefiting from work to be done by overlapping systems:

  • We don’t know which systems are conscious. It might be that highly efficient work systems tend to be unavoidably conscious. In which case, making their experience good rather than bad could be a relatively cheap way to improve the overall value of the world.
  • For humans, doing purposeful activities is satisfying, so much so that there are concerns about how humans will cope when they are replaced by machines. It might be hard for humans to avoid being replaced, since they are probably much less efficient than other possible machines. But if doing useful things tends to be gratifying for creatures—or for the kinds of creatures we decide are good to have—then it is less obvious that highly efficient creatures won’t be better off doing work themselves, rather than being separate from it.
  • Consciousness is presumably cheap and useful for getting something done, since we evolved to have it.
  • Efficient production doesn’t seem to evolve to be entirely specialized, especially if we take an abstract view of ‘production’. For instance, it is helpful to produce the experience of being a sports star alongside the joy of going to sports games.
  • Specialization seems especially helpful if keeping track of things is expensive. However, technology will make that cheaper, so perhaps the world will tend less toward specialization than it currently seems. For instance, you would prefer to plant an entire field of one vegetable rather than a mixture, because then when you harvest them, you can do it quickly without sorting them. But if sorting them is basically immediate and free, you might prefer to plant the mixture. For instance, if the vegetables take different nutrients from the soil, or if one wards off insects that would eat the other.