Moral progress enhancement

[Epistemic status: speculation]

If moral progress is so important, we should probably try to improve it.

1. Why have ordinary people been immoral en masse?

From a previous post:

I would like to think I wouldn’t have been friends with slave owners, anti-semites or wife-beaters, but then again most of my friends couldn’t give a damn about the suffering of animals, so I guess I would have been. – Robert Wiblin

I expect the same friends would have been any of those things too, given the right place and period of history. The same ‘faults’ appear to be responsible for most old fashioned or foreign moral failings: not believing that anything bad is happening if you don’t feel bad about it, and not feeling bad about anything unless there is a social norm of feeling bad about it.

That is, I claim that the procedure individuals use for morality has these key components:

  1. Conformist moral affect: people have moral feelings, and these mostly reflect what their peers deem right or wrong.
  2. Dictatorship of moral affect: moral feelings directly determine what people endorse.

So for instance if everyone around you tortures puppies, most people consequently feel ok about puppy torture. And then if you feel ok torturing puppies, you assume that you are in fact ok with it, rather than, for instance, doing an extra step of conscious deliberation to check this.

(You might wonder what you would be consciously deliberating here: I’m not taking a stance on ethics or meta-ethics, but I think many popular stances do not equate moral correctness with ‘what a person feels like’ so it should often be intelligible to check that one endorses the output of one’s moral feelings.)

This is all a complicated way of saying ‘people do bad because they copy other people who do bad’.

I think it is valuable to say it in the complicated way, because it helps with seeing what might be done differently. It also makes it clearer why things are not so bad—if people only ever copied other people, human morality would be random, which I think is false.

I could say more about why I believe these things, but I probably won’t unless anyone especially disagrees.


2. Should everyone use a different procedure instead?

I claim that while these procedures lead to terrible moral failings by otherwise nice people, they also lead to virtually all nice moral behavior by nice people. So I wouldn’t want to abandon them hastily.

Plus, the obvious alternative seems worse. I’d probably much rather live in a society largely comprised of sheep who follow others’ lead on moral issues than one where every individual reasoned about morality themselves from first principles—in whatever time they decided to allocate to the project—and then took their conclusions seriously.

But I expect that there are mild variations on the status quo that are improvements. If we look at how change usually happens, possibly we can direct it a bit.

3. How do morals change?

On the story here, moral views should be basically stable apart from gradual drift. They would change faster if people sometimes had anomalous moral feelings (i.e. those that don’t reflect the existing consensus around them), or if some people thought about what is right independently of their own feelings.

For instance, a world that doesn’t care about animal welfare would likely remain so until enough people have strong empathy toward animals that causes them to feel bad about animal suffering in spite of popular indifference, or until some people think about whether they endorse animal suffering from some abstract standpoint (such as utilitarianism), and condemn it in spite of having few feelings about it. This sounds about right to me as key ways that moral change happens, but I don’t know a lot about the history of this so I could easily be wrong.

4. How could morals change more and better?

There are probably lots of things to say about this, but I’ll say some random ones that I thought of.

I said that society moves away from existing moral equilibria by people having anomalous feelings, or people deciding to think about what is right independent of feelings. So things are likely to change more both when more people do those things more, and when the people who do those things have an easier time affecting anything. For instance, an initially uncaring society is more likely to come to care about animal welfare if more of its members find themselves empathising with animals in spite of common norms, or if that minority is respected more, or at least has more ways to barrage people who disagree with them with videos that might change their feelings.

This says nothing about the direction of change however. It isn’t obvious whether more or less change is good, or whether there are many directions change happens in, or how many of them are good. And perhaps we can say something more specific about what kinds of feelings or independent moral thought helps?

5. Separating moral feelings and moral positions

My guess is that thinking about ethics instead of acting directly on ethical feelings is usually good. Even if you think ethical feelings are a good basis for decisions, thinking about ethics seems useful because it tends to take a bunch of feelings related to different situations as inputs, and look for consistent positions across a range of questions. My guess is that if there are some ethical views that you would endorse after much thought, this method gets more information about them out of your ethical feelings than acting on each ethical feeling in a one-off fashion does.

I might be failing to think of some kinds of ethical thinking that people do. The ones I’m familiar with seem to focus on trying to come up with general principles that unite a bunch of moral feelings (including feelings about how morality should involve general principles and not depend on arbitrary things like spatial coordinates).

6. Having better moral feelings

There are probably lots of ways to go about getting unusual moral feelings. You can pick them up from other cultures, or make unusual conceptual associations, or take drugs, or have some sort of weird morally relevant synesthesia. So I wonder if you can disproportionately try to cause aberrant moral feelings that are useful for moral progress. My guess is yes, but before discussing that, let’s consider common ways moral feelings do change.

First, I wonder if anomalous feelings are often just from changing which group your moral feelings are trying to conform with. If you begin to think of foreigners or women or animals as being in your social sphere, and you imagine that they don’t approve of being treated badly in certain ways, then you come to think treating them badly is immoral just by the usual process of conforming with local moral consensus.

Another kind of moral feeling seems to come from generalizing moral feelings you already have. For instance, if you have a strong sense that pain is bad, and also a sense that it is ok to whip people as punishment, and then you watch someone getting whipped and see that it involves pain, you probably end up with some conflicting feelings. And perhaps if you grew up away from people getting whipped, so that you have unusually weak feelings about whether it should be allowed, your sense that causing pain is wrong might win out, where it didn’t for other people in your society. So that’s another way you might end up with unusual moral feelings.

I think there is a large class of cases like this where people have moral feelings about the badness of internal states like suffering or indignity, and moral feelings about it being ok to take certain external actions, but where the external actions cause the internal states for someone else. For instance, it might feel wrong for innocent people to live in destitution and danger, and it might also feel right to be able to control who enters one’s country. And both of these might be prevalent views. Which feelings you end up having about the overall issue of refugee quotas is then not very determined. I think in situations like this people often have unusual feelings relative to people around them because they are in a slightly unusual position—for instance, one where refugees are unusually salient.

7. A specific suggestion for having better moral feelings

I propose that a good way to have novel and useful moral feelings is to try to experience the situations and feelings of the people involved in the relevant situation, in accurate proportions. For instance, if you are making decisions about animal welfare, I expect your feelings to be different from most people’s, and also to more accurately track the ethical views you would want to have, if you have interacted with distressed chickens, and happy chickens, and competing farm-owners, and people who do somewhat better on a meat-based diet, and have spent more time with the chickens than with the farm owners, in proportion to their relative scale.

Sometimes it is possible to experience the interests of one side much more strongly than the other. For instance, you might one day be able to see that a genetically modified person is well off, but it will be harder to really experience the badness of playing God. So the proposed heuristic for honing moral feelings might seem inherently utilitarian, in that it only accounts for the feelings of conscious entities. I don’t think that’s true though. You can still set out to experience the things that might most viscerally elicit the feeling of badness of playing God. I can’t actually think of anything that would make me feel conflicted about playing God in the relevant way, so maybe I should find out what makes someone else feel bad about it, at least before I play God. My guess is that there are situations that will make me feel more uneasy about playing God, and I’m suggesting that I will have better moral feelings in expectation if I try to actually viscerally experience those.


3 responses to “Moral progress enhancement”

  1. Location is not random. It’s strongly correlated with LOTS of relevant considerations.

    It’s not that this proposed method is biased toward Utilitarianism that’s a problem, but that it’s biased toward act Utilitarianism, despite act Utilitarianism being generally recognised as false (more so, in fact, than the closely related Causal Decision Theory).

    The ‘Accurate Proportions’ clause doesn’t appear to me to be actionable. In particular, it’s hard to attend in proportion to spatially and especially temporally distant moral objects.

    Lots more to say here. Important topic.
    Here’s a hypothesis that I’m pretty confident of.

    A) people have innate moral intuitions drawing from empathy (caring) and other ‘moral emotions’ enumerated and unenumerated.
    B) people have drives to conform, to force ingroup members to conform, and to interpret non-conformity as a status claim. They also have drives to put down those who make status claims without possessing sufficient status.
    C) these conformist drives largely but not entirely overshadow innate moral intuitions in most cases. Neurochemical disruptions, extreme status validation, and probably some other phenomena can reduce this overshadowing. Very strong drives, especially strong disgust, are fairly resistant to conformity. Empathy and concern for close relatives, especially if they are young and/or are your children, is highly resistant.
    D) status claims are more effective, both conferring more status and suffering less repression, when they serve as precipitation nuclei for political coalitions, which is easier if the status claims are non-random, taking advantage of pre-existing structure such as innate moral feelings.
    E) status claims are more effective when they look like self-imposed handicaps.

    A tentative conclusion from this is that moral progress arises largely from elites deviating from consensus towards innate moral feelings in ways that are easy for other elites to form a coalition around and which appear to handicap the elites in their general pursuit of status more than they actually do. For instance, commercial elites win status and allies by deviating from a slave-holding consensus in a manner which appears to constitute a large financial/power sacrifice, but which actually does not constitute such a sacrifice because commerce is a more efficient and sustainable way to have wealth and power anyway.

  2. I wonder to what extent the core idea of moral progress vs a random walk is really so obvious. It seems there is some inherent Cartesian-esque privileging of something called consciousness that happens to be a core ideology of the winners of history (so far).

  3. I think you’re missing two ways for morals to change, both of which seem more important than the ones you identify. You aren’t accounting for changes in incentives or for environmental effects on personality.

    “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Michael Vassar gave an excellent example of incentive changes causing moral progress: it’s much easier to believe that slavery is excellent when you own a cotton plantation, and much easier to realise that it’s abominable when people invent many new ways to be wealthy that don’t involve land-owning.

    The options available to a culture have a tremendous effect on how it’s structured. I think this is the single biggest factor in moral progress and I think it’s probably much bigger than any other. Societies with similar problems and similar technologies often find similar solutions. For example, many peoples adopt social structures similar to Pashtunwali because it’s good at handling “extreme fluctuations in the level of resources and intense competition for them”. [1] This isn’t just a modern thing: I think you can see the same patterns in the hill tribes of Biblical Judea. As other examples, mass slavery and god-kings seem to be pretty effective responses to Bronze Age technology, while feudalism seems to be very effective when communication is slow and heavy cavalry is supreme, and reliable contraception and STD treatments have drastically changed sexual mores. You can also get changes from new problems instead of new technologies: for example, liberalism is an excellent way to prevent wars of religion.

    I see two main clusters of “environmental effects on personality”. One is things like lead poisoning, iodine deficiency, disease incidence and other gross physical effects. But there are also subtler effects, such as from general resource abundance or scarcity (particularly during childhood), livelihood (being a merchant and being a farmer select for different traits), and level of social trust. The last one seems particularly important: the damage done by slaving parties seems to still be hurting the economies of African countries that were raided for the transatlantic slave trade.


