Tag Archives: altruism

Might law save us from uncaring AI?

Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.

When is it efficient to kill humans?

At first glance, it looks like creatures with the power to take humans’ property would do so if the value of the property, minus the cost of stealing it, was greater than the value of anything the human might produce with it. When AI is so cheap and efficient that the human will be replaced immediately, and the replacement will use the resources enough better to make up for the costs of stealing and replacement, the human is better off dead. This might be soon after humans are overtaken. However such reasoning really imagines one powerful AI’s dealings with one person, then assumes that generalizes to many of each. Does it?
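The condition above can be written as a simple inequality. This is a back-of-envelope sketch of the reasoning as stated, not anyone’s actual model, and all the numbers are invented for illustration:

```python
# Sketch of the text's condition: taking the human's resources is "efficient"
# only if their value to the taker, net of the costs of stealing and of
# replacing the human, exceeds what the human would have produced with them.
def stealing_is_efficient(resource_value_to_ai, cost_of_stealing,
                          cost_of_replacement, human_output):
    """True when taking the human's resources beats leaving them be."""
    return (resource_value_to_ai - cost_of_stealing - cost_of_replacement
            > human_output)

# Early on: the AI is only modestly more efficient, and conflict is costly.
print(stealing_is_efficient(12, 5, 4, 10))   # False: the human keeps her stuff
# Later: the AI is vastly more efficient, stealing and replacement are cheap.
print(stealing_is_efficient(100, 5, 4, 10))  # True
```

Note this frames one AI dealing with one human; as the post goes on to argue, the many-against-many case under law can come out differently.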

What does law do?

In a group of agents where none is more powerful than the rest combined, and there is no law, the strongest coalition basically gets to do what it wants, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if it could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape the dynamic of groups dominating one another (or some of it) by everyone in a very large group pre-committing to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong await to crush the strong.

Generous people cross the street before the beggar

Robert Wiblin points to a study showing that the most generous people are the most keen to avoid situations where they will be generous, even though the people they would have helped will go without.

We conduct an experiment to demonstrate the importance of sorting in the context of social preferences. When individuals are constrained to play a dictator game, 74% of the subjects share. But when subjects are allowed to avoid the situation altogether, less than one third share. This reversal of proportions illustrates that the influence of sorting limits the generalizability of experimental findings that do not allow sorting. Moreover, institutions designed to entice pro-social behavior may induce adverse selection. We find that increased payoffs prevent foremost those subjects from opting out who share the least initially. Thus the impact of social preferences remains much lower than in a mandatory dictator game, even if sharing is subsidized by higher payoffs…

A big example of generosity-inducing institutions causing adverse selection is market transactions with poor people.

For some reason we hold those who trade with another party responsible for that party’s welfare. We blame a company for not providing its workers with more, but don’t blame other companies for lack of charity to the same workers. This means that you can avoid responsibility to be generous by not trading with poor people.

Many consumers feel that if they are going to trade with poor people they should buy fair trade or thoroughly research the supplier’s niceness. However they often lack the money or time for those options, so instead just avoid buying from poor people. Only the less ethical remain to contribute to the purses of the poor.

Probably the kindest girl in my high school said to me once that she didn’t want a job where she would get rich because there are so many poor people in the world. I said that she should be rich and give the money to the poor people then. Nobody was wowed by this idea. I suspect something similar happens often with people making business and employment decisions. Those who have qualms about a line of business such as trade with poor people tend not to go into that, but opt for something guilt free already, while the less concerned do the jobs where compassion might help.

Why do animal lovers want animals to feel pain?

Behind the veil of (lots of) ignorance, would you rather squished chickens be painless?

We may soon be able to make pain-free animals, according to New Scientist. The study they reported on finds that people are not enthused by creating such creatures for scientific research, which is interesting. Robin Hanson guessed prior to seeing the article that this was because endorsing pain-free animals would require thinking that farmed animals now were in more pain than wild animals, which people don’t think. However it turns out that vegetarians and animal welfare advocates were much more opposed to the idea than others in the study, so another explanation is needed.

Robert Wiblin suggested to me that vegetarians are mostly in favor of animals not being used, as well as not being hurt, so they don’t want to support pain-free use, as that is supporting use. He made this comparison:

Currently children are being sexually abused. The technology now exists to put them under anaesthetic so that they don’t experience the immediate pain of sexual abuse. Should we put children under anaesthetic to sexually abuse them?

A glance at the comments on other sites reporting the possibility of painless meat suggests vegetarians cite this objection along with a lot of different reasons for disapproval. And sure enough, it seems to be mainly meat eaters who say eliminating pain would make them feel better about eating meat. The reasons vegetarians (and others) give for not liking the idea, or for not being more interested in pain-free meat, include:

  • The animals would harm themselves without knowing
  • Eating animals is bad for environmental or health reasons
  • Killing is always wrong
  • Animals have complex social lives and are sad when their family are killed, regardless of pain
  • Animals are living things [?!]
  • There are other forms of unpleasantness, such as psychological torture
  • How can we tell they don’t feel pain?
  • We will treat them worse if we think they can’t feel it, and we might be wrong
  • There are better solutions, such as not eating meat
  • It’s weird, freaky, disrespectful
  • It’s selfish and unnecessary for humans to do this to animals

Many reasonable reasons. The fascinating thing though is that vegetarians seem to consistently oppose the idea, yet not for shared reasons. Three (not mutually exclusive) explanations:

  1. Vegetarians care more about animals in general, so care about lots of related concerns.
  2. Once you have an opinion, you collect a multitude of reasons to have it. When I was a vegetarian I thought meat eating was bad for the environment, bad for people who need food, bad for me, maybe even bad for animals. This means when a group of people lose one reason to hold a shared belief they all have other reasons to put forward, but not necessarily the same ones.
  3. There’s some single reason vegetarians are especially motivated to oppose pain-free meat, so they each look for a reason to oppose it, and come across different ones, as there are many.

I’m interested by 3 because the situation reminds me of a pattern in similar cases I have noticed before. It goes like this. Some people make personal sacrifices, supposedly toward solving problems that don’t threaten them personally. They sort recycling, buy free range eggs, buy fair trade, campaign for wealth redistribution etc. Their actions are seen as virtuous. They see those who don’t join them as uncaring and immoral. A more efficient solution to the problem is suggested. It does not require personal sacrifice. People who have not previously sacrificed support it. Those who have previously sacrificed object on grounds that it is an excuse for people to get out of making the sacrifice.

The supposed instrumental action, as the visible sign of caring, has become virtuous in its own right. Solving the problem effectively is an attack on the moral people – an attempt to undermine their dream of a future where everybody longs to be very informed on the social and environmental effects of their consumption choices or to sort their recycling really well. Some examples of this sentiment:

  • A downside to recreating extinct species with cloning is that it will let people bother even less about stopping extinctions.
  • A recycling system where items are automatically and efficiently sorted at the plant rather than individually in homes would be worse because then people would be ignorant about the effort it takes to recycle.
  • Modern food systems lamentably make people lazy and ignorant of where their food comes from.
  • Making cars efficient just lets people be lazy and drive them more, rather than using real solutions like bikes.
  • The ready availability of general knowledge on the internet allows people to be ignorant and not bother learning facts.

In these cases, having solved a problem a better way should mean that efforts to solve it via personal sacrifice can be lessened. This would be a good thing if we wanted to solve the problem, and didn’t want to sacrifice. We would rejoice at progress allowing ever more ignorance and laziness on a given issue. But often we instead regret the end of an opportunity to show compassion and commitment. Especially when we were the compassionate, committed ones.

Is vegetarian opposition to preventing animal pain an example of this kind of motivation? Vegetarianism is a big personal effort, a moral issue, a cause of feelings of moral superiority, and a feature of identity which binds people together. It looks like other issues where people readily claim fear of an end to virtuous efforts.  How should we distinguish between this and the other explanations?

Charitable explanation

Is anyone really altruistic? The usual cynical explanations for seemingly altruistic behavior are that it makes one feel good, it makes one look good, and it brings other rewards later. These factors are usually present, but how much do they contribute to motivation?

One way to tell if it’s all about altruism is to invite charity that explicitly won’t benefit anyone. Curious economists asked their guinea pigs for donations to a variety of causes, warning them:

“The amount contributed by the proctor to your selected charity WILL be reduced by however much you pass to your selected charity. Your selected charity will receive neither more nor less than $10.”

Many participants chipped in nonetheless:

We find that participants, on average, donated 20% of their endowments and that approximately 57% of the participants made a donation.

This is compared to giving an average of 30-49% in experiments where donating benefited the cause, but it is of course possible that knowing you are helping offers more of a warm glow. It looks like at least half of giving isn’t altruistic at all, unless the participants were interested in the wellbeing of the experimenters’ funds.
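As a rough check on that “at least half” claim, here is the back-of-envelope arithmetic using only the percentages quoted above (nothing beyond those figures is from the study):

```python
# Donating persisted at 20% of endowments even when it could not help anyone,
# versus 30-49% in experiments where donating did benefit the cause. The share
# of normal giving that survives with no benefit depends on which baseline
# you take.
non_helping_rate = 0.20
for helping_rate in (0.30, 0.49):
    share = non_helping_rate / helping_rate
    print(f"baseline {helping_rate:.0%}: about {share:.0%} of giving "
          f"survives with no benefit to anyone")
```

Against the 30% baseline about two thirds of giving survives without helping; against the 49% baseline it is about 41%, so “at least half” is in the right ballpark on the lower baseline.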

The opportunity to be observed by others also influences how much we donate, and we are duly rewarded with reputation:

Here we demonstrate that more subjects were willing to give assistance to unfamiliar people in need if they could make their charity offers in the presence of their group mates than in a situation where the offers remained concealed from others. In return, those who were willing to participate in a particular charitable activity received significantly higher scores than others on scales measuring sympathy and trustworthiness.

This doesn’t tell us whether real altruism exists though. Maybe there are just a few truly altruistic deeds out there? What would a credibly altruistic act look like?

Fortunately for cute children desirous of socially admirable help, much charity is not driven by altruism (picture: Laura Lartigue)

If an act made the doer feel bad, look bad to others, and endure material cost, while helping someone else, we would probably be satisfied that it was altruistic. For instance if a person killed their much loved grandmother to steal her money to donate to a charity they believed would increase the birth rate somewhere far away, at much risk to themselves, it would seem to escape the usual criticisms. And there is no way you would want to be friends with them.

So why would anyone tell you if they had good evidence they had been altruistic? The more credible evidence should look particularly bad. And if they were keen to tell you about it anyway, you would have to wonder whether it was for show after all. This makes it hard for an altruist to credibly inform anyone that they were altruistic. On the other hand the non-altruistic should be looking for any excuse to publicize their good deeds. This means the good deeds you hear about should be very biased toward the non-altruistic. Even if altruism were all over the place it should be hard to find. But it’s not, is it?

Is your subconscious communist?

People can be hard to tell apart, even to themselves (picture: Giustino)

Humans make mental models of other humans automatically, and appear to get somewhat confused about who is who at times.  This happens with knowledge, actions, attention and feelings:

Just having another person visible hinders your ability to say what you can see from where you stand, though considering a non-human perspective does not:

[The] participants were also significantly slower in verifying their own perspective when the avatar’s perspective was incongruent. In Experiment 2, we found that the avatar’s perspective intrusion effect persisted even when participants had to repeatedly verify their own perspective within the same block. In Experiment 3, we replaced the avatar by a bicolor stick …[and then] the congruency of the local space did not influence participants’ response time when they verified the number of circles presented in the global space.

Believing you see a person moving can impede you in moving differently, similar to rubbing your tummy while patting your head, but if you believe the same visual stimulus is not caused by a person, there is no interference:

[A] dot display followed either a biologically plausible or implausible velocity profile. Interference effects due to dot observation were present for both biological and nonbiological velocity profiles when the participants were informed that they were observing prerecorded human movement and were absent when the dot motion was described as computer generated…

In a task where the cues to act may be incongruent with the actions (a red pointer signals that you should press the left button whether it points left or right, and a green pointer signals the right button), incongruent signals take longer to respond to than congruent ones. This stops when you only have to look after one of the buttons. But if someone else picks up the other button, it becomes harder once again to respond to incongruent cues:

The identical task was performed alone and alongside another participant. There was a spatial compatibility effect in the group setting only. It was similar to the effect obtained when one person took care of both responses. This result suggests that one’s own actions and others’ actions are represented in a functionally equivalent way.

You can learn to subconsciously fear a stimulus by seeing the stimulus and feeling pain, but not by being told about it. However seeing the stimulus while watching someone react to pain works like feeling it yourself:

In the Pavlovian group, the CS1 was paired with a mild shock, whereas the observational-learning group learned through observing the emotional expression of a confederate receiving shocks paired with the CS1. The instructed-learning group was told that the CS1 predicted a shock…As in previous studies, participants also displayed a significant learning response to masked [too fast to be consciously perceived] stimuli following Pavlovian conditioning. However, whereas the observational-learning group also showed this effect, the instructed-learning group did not.

A good summary of all this, Implicit and Explicit Processes in Social Cognition, interprets that we are subconsciously nice:

Many studies show that implicit processes facilitate the sharing of knowledge, feelings, and actions, and hence, perhaps surprisingly, serve altruism rather than selfishness. On the other hand, higher-level conscious processes are as likely to be selfish as prosocial.

It’s true that these unconscious behaviours can help us cooperate, but it seems they are no more ‘altruistic’ than the two-faced conscious processes the authors cite as evidence for conscious selfishness. Our subconsciouses are like the rest of us: adeptly ‘altruistic’ when it benefits them, such as when watched. For an example of how well designed we are in this regard, consider the automatic empathic expression of pain we make upon seeing someone hurt. When we aren’t being watched, feeling other people’s pain goes out the window:

…A 2-part experiment with 50 university students tested the hypothesis that motor mimicry is instead an interpersonal event, a nonverbal communication intended to be seen by the other….The victim of an apparently painful injury was either increasingly or decreasingly available for eye contact with the observer. Microanalysis showed that the pattern and timing of the observer’s motor mimicry were significantly affected by the visual availability of the victim.