Person-moment affecting views

[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]

Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:

  1. World A can only be better than world B insofar as it is better for someone.
  2. World A can’t be better than world B for Alice, if Alice exists in world A but not world B.

The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.

I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree that the differences between Derek Parfit and me have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).

If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.

Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:

  1. How good different worlds are depends strongly on which definition of ‘person’ you choose (which person moments you choose to cluster together), but this is a somewhat arbitrary pragmatic choice
  2. There is some correct definition of ‘person’ for the purpose of ethics (i.e. there is some relation between person moments that makes different person moments in the future ethically relevant by virtue of having that connection to a present person moment)
  3. Different person-moments are more or less closely connected in various ways, and a person-affecting view should actually have a sliding scale of importance for different person-moments

Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:

  1. Alice painting more paintings. If Alice painted three paintings in world A, and doesn’t exist in world B, I think most people would say that Alice painted more paintings in world A than in world B. And more clearly, that world A has more paintings than world B, even if we insist that a world can’t have more paintings without somebody in particular having painted more paintings. Relatedly, there are many things people do where the sentence ‘If Alice didn’t exist, she wouldn’t have done X’ sounds clearly true.
  2. Alice having painted more paintings per year. If Alice painted one painting every thirty years in world A, and didn’t exist in world B, in world B the number of paintings per year is undefined, and so incomparable to ‘one per thirty years’.

Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?

Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?

I see several possible responses:

  1. No we can’t. We should have person-moment affecting views.
  2. Things can’t be better or worse for person-moments, only for entire people, holistically across their lives, so the question is meaningless. (Or relatedly, how good a thing is for a person is not a function of how good it is for their person-moments, and it is how good it is for the person that matters).
  3. Yes, there is some difference between people and person moments, which means that person-moments can benefit without existing in worlds that they are benefitting relative to, but people cannot.

The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.

So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person moment rather than a different sad person moment?

Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.
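
To make this concrete, here is a toy sketch of how such a comparison might be scored. Nothing about the representation is standard; it is made up purely for illustration, with preferences treated as predicates over a world, and a person-moment only counting if an identical moment exists in both worlds being compared.

```python
# Toy sketch (purely illustrative, not a standard formalism): score worlds on a
# person-moment affecting, preference-satisfaction view. A person-moment's
# preferences only count if an identical moment exists in the other world.

def wants_t1_computer_scientist(world):
    # Alice_t0's 'meddling' preference about a later person-moment.
    return any(m["time"] == 1 and m["career"] == "computer scientist" for m in world)

world_A = [
    {"time": 0, "career": "student", "prefs": [wants_t1_computer_scientist]},  # Alice_t0
    {"time": 1, "career": "computer scientist", "prefs": []},                  # Alice_t1A
]
world_C = [
    {"time": 0, "career": "student", "prefs": [wants_t1_computer_scientist]},  # Alice_t0 (same moment)
    {"time": 1, "career": "fighter pilot", "prefs": []},                       # Alice_t1C (different moment)
]

def description(moment):
    # A moment's identity here is just its non-preference description.
    return {k: v for k, v in moment.items() if k != "prefs"}

def shared_moments(world, other):
    # Only moments that exist identically in both worlds are comparable.
    return [m for m in world if any(description(m) == description(n) for n in other)]

def value(world, other):
    # Count satisfied preferences, but only those held by shared moments.
    return sum(pref(world) for m in shared_moments(world, other) for pref in m["prefs"])

print(value(world_A, world_C))  # 1: Alice_t0 is shared, and her preference comes true in A
print(value(world_C, world_A))  # 0: Alice_t0 is shared, but her preference fails in C
```

On this toy scoring, world A beats world C only because the shared moment Alice_t0 gets what she wants there; the t1 moments themselves contribute nothing, since neither exists in both worlds.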

I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world, for instance; it is only good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good.

So, things that are never directly valuable in this world:

  • Joy
  • Someone getting what they want and also knowing about it
  • Anything that isn’t a meddling preference

On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person moment for the benefit of previous person moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, but some, about different people in the future.

So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:

  1. Person-moments can benefit without having an otherworldly counterpart, even though people cannot. Which is to say, only person-moments that are part of the same ‘person’ in different worlds can benefit from their existence. ‘Person’ here is either an arbitrary pragmatic definition choice, or some more fundamental ethically relevant version of the concept that we could perhaps discover.
  2. Benefits accrue to persons, not person-moments. In particular, benefits to persons are not a function of the benefits to their constituent person-moments. Where ‘person’ is again either a somewhat arbitrary choice of definition, or a more fundamental concept.
  3. A sliding scale of ethical relevance of different person-moments, based on how narrow a definition of ‘person’ unites them with any currently existing person-moments. Along with some story about why, given that you can apparently compare all of them, you are still weighting some less, on grounds that they are incomparable.
  4. Person-moment affecting views

None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.

Replacing expensive costly signals

I feel like there is a general problem where people signal something using some extremely socially destructive method, and we can conceive of more socially efficient ways to send the same signal, but trying out alternative signals suggests that you might be especially bad at the traditional one. For instance, an employer might reasonably suspect that a job candidate who did a strange online course instead of normal university would have done especially badly at normal university.

Here is a proposed solution. Let X be the traditional signal, Y be the new signal, and Z be the trait(s) being advertised by both. Let people continue doing X, but subsidize Y on top of X for people with very high Z. Soon Y is a signal of higher Z than X is, and understood by the recipients of the signals to be a better indicator. People who can’t afford to do both should then prefer Y to X, since Y is a stronger signal, and since it is more socially efficient it is likely to be less costly for the signal senders.

If Y is intrinsically no better a signal than X (without your artificially subsidizing great Z-possessors to send it), then in the long run Y might only end up as strong a signal as X, but in the process, many should have moved to using Y instead.

(A possible downside is that people may end up just doing both forever.)

For example, if you developed a psychometric and intellectual test that only took half a day and predicted very well how someone would do in an MIT undergraduate degree, you could run it for a while for people who actually do MIT undergraduate degrees, offering prizes for high performance, or just subsidizing taking it at all. After the best MIT graduates say on their CVs for a while that they also did well on this thing and got a prize, it is hopefully an established metric, and an employer would as happily have someone with the degree as with a great result on your test. At which point an impressive and ambitious high school leaver would take the test, modulo e.g. concerns that the test doesn’t let you hang out with other MIT undergraduates for four years.
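
To check that the mechanism hangs together, here is a minimal simulation sketch of the X/Y/Z story above. All the numbers, thresholds and distributions are made up for illustration: Z is a latent trait, X is the traditional signal, and Y is at first only subsidized for people with very high Z who already did X.

```python
# Toy simulation (illustrative numbers only): does subsidizing Y for high-Z
# people who already do X make Y read as the stronger signal?
import random

random.seed(0)

people = [{"z": random.gauss(0, 1)} for _ in range(10_000)]

# Traditionally, everyone above some ability cutoff does signal X.
for p in people:
    p["x"] = p["z"] > 0.0

# Subsidy phase: Y is done on top of X, but only by people with very high Z.
for p in people:
    p["y"] = p["x"] and p["z"] > 1.5

def mean_z(group):
    return sum(p["z"] for p in group) / len(group)

x_only = [p for p in people if p["x"] and not p["y"]]
with_y = [p for p in people if p["y"]]

# An employer reading the signals now finds Y-holders stronger on average
# (roughly 0.6 vs roughly 1.9 here), so Y becomes the stronger signal.
print("mean Z given X only:", round(mean_z(x_only), 2))
print("mean Z given Y:     ", round(mean_z(with_y), 2))
```

Whether this works in practice of course depends on the recipients of the signals actually updating on the observed quality of the people who send Y, which is the step the subsidy is meant to force.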

I don’t know if this is the kind of problem people actually have with replacing apparently wasteful signaling systems with better things. Or if this doesn’t actually work after thinking about it for more than an hour. But just in case.

The Principled Intelligence Hypothesis

I have been reading the thought-provoking Elephant in the Brain, and will probably have more to say on it later. But if I understand correctly, a dominant theory of how humans came to be so smart is that they have been in an endless cat and mouse game with themselves, making norms and punishing violations on the one hand, and cleverly cheating their own norms and excusing themselves on the other (the ‘Social Brain Hypothesis’ or ‘Machiavellian Intelligence Hypothesis’). Intelligence purportedly evolved to get ourselves off the hook, and our ability to construct rocket ships and proofs about large prime numbers is just a lucky side product.

As a person who is both unusually smart, and who spent the last half hour wishing the seatbelt sign would go off so they could permissibly use the restroom, I feel like there is some tension between this theory and reality. I’m not the only unusually smart person who hates breaking rules, who wishes there were more rules telling them what to do, who incessantly makes up rules for themselves, who intentionally steers clear of borderline cases because it would be so annoying to think about, and who wishes the nominal rules were policed predictably and actually reflected expected behavior. This is a whole stereotype of person.

But if intelligence evolved for the prime purpose of evading rules, shouldn’t the smartest people be best at navigating rule evasion? Or at least reliably non-terrible at it? Shouldn’t they be the most delighted to find themselves in situations where the rules were ambiguous and the real situation didn’t match the claimed rules? Shouldn’t the people who are best at making rocket ships and proofs also be the best at making excuses and calculatedly risky norm-violations? Why is there this stereotype that the more you can make rocket ships, the more likely you are to break down crying if the social rules about when and how you are allowed to make rocket ships are ambiguous?

It could be that these nerds are rare, yet salient for some reason. Maybe such people are funny, not representative. Maybe the smartest people are actually savvy. I’m told that there is at least a positive correlation between social skills and other intellectual skills.

I offer a different theory. If the human brain grew out of an endless cat and mouse game, what if the thing we traditionally think of as ‘intelligence’ grew out of being the cat, not the mouse?

The skill it takes to apply abstract theories across a range of domains and to notice places where reality doesn’t fit sounds very much like policing norms, not breaking them. The love of consistency that fuels unifying theories sounds a lot like the one that insists on fair application of laws, and social codes that can apply in every circumstance. Math is basically just the construction of a bunch of rules, and then endless speculation about what they imply. A major object of science is even called discovering ‘the laws of nature’.

Rules need to generalize across a lot of situations—you will have a terrible time as rule-enforcer if you see every situation as having new, ad-hoc appropriate behavior. We wouldn’t even call this having a ‘rule’. But more to the point, when people bring you their excuses, if your rule doesn’t already imply an immovable position on every case you have never imagined, then you are open to accepting excuses. So you need to see the one law manifest everywhere. I posit that technical intelligence comes from the drive to make these generalizations, not the drive to thwart them.

On this theory, probably some other aspects of human skill are for evading norms. For instance, perhaps social or emotional intelligence (I hear these are things, but will not pretend to know much about them). If norm-policing and norm-evading are somewhat different activities, we might expect to have at least two systems that are engorged by this endless struggle.

I think this would solve another problem: if we came to have intelligence for cheating each other, it is unclear why general intelligence per se is the answer to this, but not to other problems we have ever had as animals. Why did we get mental skills this time rather than earlier? Like that time we were competing over eating all the plants, or escaping predators better than our cousins? This isn’t the only time that a species was in fierce competition against themselves for something. In fact that has been happening forever. Why didn’t we develop intelligence to compete against each other for food, back when we lived in the sea? If the theory is just ‘there was strong competitive pressure for something that will help us win, so out came intelligence’, I think there is a lot left unexplained. Especially since the thing we most want to explain is the spaceship stuff, which on this theory is a random side effect anyway. (Note: I may be misunderstanding the usual theory, as a result of knowing almost nothing about it.)

I think this Principled Intelligence Hypothesis does better. Tracking general principles and spotting deviations from them is close to what scientific intelligence is, so if we were competing to do this (against people seeking to thwart us) it would make sense that we ended up with good theory-generalizing and deviation-spotting engines.

On the other hand, I think there are several reasons to doubt this theory, or details to resolve. For instance, while we are being unnecessarily norm-abiding and going with anecdotal evidence, I think I am actually pretty great at making up excuses, if I do say so. And I feel like this rests on the same skill as ‘analogize one thing to another’ (my being here to hide from a party could just as well be interpreted as my being here to look for the drinks, much as the economy could also be interpreted as a kind of nervous system), which seems like it is quite similar to the skill of making up scientific theories (these five observations being true is much like theory X applying in general), though arguably not the skill of making up scientific theories well. So this is evidence against smart people being bad at norm evasion in general, and against norm evasion being a different kind of skill to norm enforcement, which is about generalizing across circumstances.

Some other outside view evidence against this theory’s correctness is that my friends all think it is wrong, and I know nothing about the relevant literature. I think it could also do with some inside view details – for instance, how exactly does any creature ever benefit from enforcing norms well? Isn’t it a bit of a tragedy of the commons? If norm evasion and norm policing skills vary in a population of agents, what happens over time? But I thought I’d tell you my rough thoughts, before I set this aside and fail to look into any of those details for the indefinite future.

Why everything might have taken so long

I asked why humanity took so long to do anything at the start, and the Internet gave me its thoughts. Here is my expanded list of hypotheses, summarizing from comments on the post and from discussion elsewhere.

Inventing is harder than it looks

  1. Inventions are usually more ingenious than they seem. Relatedly, reality has a lot of detail.
  2. There are lots of apparent paths: without hindsight, you have to waste a lot of time on dead ends.
  3. People are not as inventive as they imagine. For instance, I haven’t actually invented anything – why do I even imagine I could invent rope?
  4. Posing the question is a large part of the work. If you have never seen rope, it actually doesn’t occur to you that rope would come in handy, or to ask yourself how to make some.
  5. Animals (including humans) mostly think by intuitively recognizing over time what is promising and what is not among the affordances they have, and reading what common observations imply. New affordances generally only appear through some outside force, e.g. by accident. To invent a thing, you have to somehow have an affordance to make it even though you have never seen it. And in retrospect it seems so obvious because now you do have the affordance.

People fifty thousand years ago were not really behaviorally modern

  1. People’s brains were actually biologically less functional fifty thousand years ago.
  2. Having concepts in general is a big deal. You need a foundation of knowledge and mental models to come up with more of them.
  3. We lacked a small number of unimaginably basic concepts that it is hard to even imagine not having now. For instance ‘abstraction’, or ‘changing the world around you to make it better’.
  4. Having external thinking tools is a big deal. Modern ‘human intelligence’ relies a lot on things like writing and collected data, that aren’t in anyone’s brain.
  5. The entire mental landscape of early people was very different, as Julian Jaynes suggests. In particular, they lacked self-awareness and the ability to have original thought rather than just repeating whatever they usually repeat.

Prerequisites

  1. Often A isn’t useful without B, and B isn’t useful without A. For instance, A is chariots and B is roads.
  2. A isn’t useful without lots of other things, which don’t depend on A, but take longer to accrue than you imagine.
  3. Lots of ways to solve problems don’t lead to great things in the long run. ‘Crude hacks’ get you most of the way there, reducing the value of great inventions.

Nobody can do much at all

  1. People in general are stupid in all domains, even now. Everything is always mysteriously a thousand times harder than you might think.
  2. Have I tried even making rope from scratch? Let alone inventing it?

People were really busy

  1. Poverty traps. Inventing only pays off long term, so for anyone to do it you need spare wealth and maybe institutions for capital to fund invention.
  2. People are just really busy doing and thinking about other things. Like mating and dancing and eating and so on.

Communication and records

  1. Early humans did have those things; we just don’t have good records. Which is not surprising, because our records of those times are clearly very lacking.
  2. Things got invented a lot, but communication wasn’t good/common enough to spread them (for instance because tribes were small and didn’t interact that much).

Social costs

  1. Technology might have been seen as a sign of weakness or laziness
  2. Making technology might make you stand out rather than fit in
  3. Productivity shames your peers and invites more work from you
  4. Inventions are sometimes against received wisdom

Population

  1. There were very few people in the past, so the total thinking occurring between 50k and 28k years ago was less than in the last hundred years.
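
A rough back-of-envelope version of this claim, with population figures that are loose guesses of mine rather than anything careful:

```python
# Back-of-envelope (population figures are loose assumptions, not researched):
# total person-years of thinking in the 50k-28k window vs the last century.
ancient_population = 1_000_000       # assumed rough average population back then
ancient_years = 50_000 - 28_000      # a 22,000-year window
recent_population = 4_000_000_000    # assumed rough average over the last hundred years
recent_years = 100

ancient_person_years = ancient_population * ancient_years   # 2.2e10
recent_person_years = recent_population * recent_years      # 4.0e11
print(recent_person_years / ancient_person_years)           # ~18x more recent thinking
```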

Value

  1. We didn’t invent things until they became relevant at all, and most of these things aren’t relevant to a hunter-gatherer.
  2. Innovation is risky: if you try a new thing, you might die.

Orders of invention

  1. First order inventions are those where the raw materials are in your immediate surroundings, and they don’t require huge amounts of skill. My intuition is mostly that first order inventions should have been faster. But maybe we did get very good at first order ones quickly, but it is hard to move to higher orders.
  2. You need a full-time craftsman to make most basic things to a quality where they are worth having, and we couldn’t afford full-time craftsmen for a very long time.
  3. Each new layer requires the last layer of innovation be common enough that it is available everywhere, for the next person to use.

Why did everything take so long?

One of the biggest intuitive mysteries to me is how humanity took so long to do anything.

Humans have been ‘behaviorally modern’ for about 50 thousand years. And apparently didn’t invent, for instance:

This kind of thing seems really weird introspectively, because it is hard to imagine going a whole lifetime in the wilderness without wanting something like rope, or going a whole day wanting something like rope without figuring out how to make something like rope. Yet apparently people went for about a thousand lifetimes without that happening.

Some possible explanations:

  1. Inventions are usually more ingenious than they seem. LiveScience argues that it took so long to invent the wheel because “The tricky thing about the wheel is not conceiving of a cylinder rolling on its edge. It’s figuring out how to connect a stable, stationary platform to that cylinder.” I feel like that would explain why it took a month rather than a day. But a couple of thousand lifetimes?
  2. Knowing what you are looking for is everything. If you sat a person down and said, “look, how do you attach a stationary platform to a rolling thing?” they could figure it out within a few hours, but if you just give them the world, they don’t think about whether a stationary platform attached to a rolling thing would be useful, so “how do you attach a stationary platform to a rolling thing” doesn’t come up as a salient question for a couple of thousand lifetimes.
  3. Having concepts in general is a big deal, and being an early human who had never heard of any invention was a bit like being me when I’m half asleep.
  4. Everything is always mysteriously a thousand times harder than you might think. Consider writing a blog post. Why haven’t I written a blog post in a month?
  5. Others?