
Vote on values

I. A problem

I have heard pessimism lately about whether democracy can produce good decisions frequently enough to stop everything rapidly going to Hell. Primary concerns are that voters are ignorant and that voters are evil. Supposing voters are evil, arguably any good system of government should bring about Hell. However the ignorance seems like a real issue.

A popular response to all complaints about democracy is, ‘Well, what else are we going to do? Do you want dictatorship?’

I think this ignores the potential for mild variations on democracy. ‘Democracy’ is not very specific. Wikipedia lists a bunch of variations. I’d like to suggest a different one.

The basic problem I want to solve is that the people voting for policies (directly or indirectly) are ignorant about the likely consequences of policies.

But first I’d like to point out that this is a problem for everyone, not a conflict between an ignorant team and an informed team. That ignorant people vote for destructive policies is at least as bad for the ignorant people as it is for everyone else. That is, if people truly vote for bad policies due to ignorance, they would presumably prefer the outcomes that they voted against. 

II. A (hand-wavy) solution

My proposal is for people to vote on what they want to happen, and then for someone else to put in the hard work of figuring out which policies correspond to which outcomes. That is, to vote on values.

Robin Hanson suggested this in Shall We Vote on Values, But Bet on Beliefs? (2013), as a component of Futarchy—a system where people elect representatives to stipulate their values, and use prediction markets to judge which policies will satisfy those values. Robin is mostly excited about the prediction markets aspect, but I think the idea of separating out values from policies is important on its own. Prediction markets are but one thing a population might use to figure out what to do, once they knew what their group as a whole wanted. Arguably a pretty good thing, but still. Any kind of voting on values and then doing something else about beliefs seems like it would have a number of benefits unrelated to prediction markets.

We can think of selecting policy as something like:

values + beliefs → policies

Everyone has their own values and empirical beliefs. Everyone has to share policies. Values and empirical beliefs together determine the best policies. We basically want everyone’s values to be represented in the policies. It is not important that everyone’s empirical beliefs are all represented though—if we had a good way of just using the most accurate beliefs to bring about everyone’s values, the people with the least accurate beliefs would still be better off.

Usually the combining of values and empirical beliefs into policy recommendations happens within each person’s head. Then we aggregate policies, via voting on them directly or voting for representatives who agree with us on policy. Instead, we could aggregate values alone, and combine the aggregated values with empirical data gleaned some other way.
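To make the difference concrete, here is a minimal toy simulation—my own sketch, with made-up parameters, not anything from Robin's paper. Two policies, two outcomes: policy A produces outcome X and policy B produces outcome Y, voters disagree about which outcome they want, and each voter's belief about which policy produces which outcome is only slightly better than a coin flip. It compares voting directly on policies with voting on outcomes and then mapping the winning outcome to a policy using the aggregated (majority) belief:

    import random

    random.seed(0)

    N = 2_001      # voters (odd, so there are no ties)
    Q = 0.55       # chance an individual's belief about policy effects is correct
    PREF_X = 0.55  # fraction of voters whose values favor outcome X over Y
    TRIALS = 2_000

    def trial():
        """One election. 'Success' means the enacted policy produces
        whichever outcome the realized majority values."""
        prefers_x = [random.random() < PREF_X for _ in range(N)]
        correct = [random.random() < Q for _ in range(N)]
        maj_wants_x = sum(prefers_x) * 2 > N

        # Scheme 1: vote directly on policies. A voter who values X votes
        # for A when their belief is correct and for B when it is wrong
        # (symmetrically for Y-valuers), so belief noise dilutes the
        # value signal.
        votes_a = sum((px and ok) or (not px and not ok)
                      for px, ok in zip(prefers_x, correct))
        success_policy = (votes_a * 2 > N) == maj_wants_x

        # Scheme 2: vote on values first, then map the winning outcome to
        # a policy using the aggregated (majority) belief, which is almost
        # surely right for large N when Q > 0.5 (the Condorcet jury
        # theorem). The majority gets its outcome exactly when the
        # aggregate belief is right.
        success_value = sum(correct) * 2 > N

        return success_policy, success_value

    results = [trial() for _ in range(TRIALS)]
    print("majority satisfied, voting on policies:",
          sum(r[0] for r in results) / TRIALS)
    print("majority satisfied, voting on values:  ",
          sum(r[1] for r in results) / TRIALS)

With these numbers the majority gets the outcome it wants only around two thirds of the time under direct policy voting, but essentially always under value voting: individually noisy beliefs wash out when aggregated separately, instead of diluting each vote.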

III. Good things

I claim this would achieve the good things about democracy—e.g. accounting for everyone’s interests, fairness, avoiding extreme evils, reducing reasons for conflict—at least not much worse than the current system does, while mostly mitigating the problem that most people are ignorant or misinformed about most things.

I think there are also a lot of other benefits. Here are benefits of voting on values that I can think of:

  • More accurate sources of empirical belief. There are lots of better ways to get accurate empirical views than taking a national vote. The problem is often summarized as ‘people are stupid’ and ‘people are uneducated’, but even smart, educated people are probably very ignorant about the policies they vote on a lot of the time, relative to experts. It would just be an infeasibly huge amount of work to have informed views about the myriad policy questions a person has some tiny amount of political influence over.
  • Much less effort. Instead of every person in the country figuring out which policies lead to which outcomes (a very tricky problem), it only has to be done once. 
  • More efficient use of information. If everyone’s beliefs constitute noisy evidence about the true state of the world, and each person uses only their own beliefs to choose their favorite policy, most of the information that could bear on each policy preference goes unused. If beliefs are aggregated in some way and then applied to aggregated values, all of the information is used (this is the mechanism in the toy simulation above).
  • More fairness to those with few resources. The status quo means that uneducated people are less likely to get the outcomes they want, because they are more likely to vote for policies that don’t support those outcomes, due to misunderstanding. This proposal should avoid that bias.
  • Less destruction from voting to express values. Arguably, most of the consequences of a person’s political positions are on friends’ and acquaintances’ perceptions of the person. So we might expect political choices to be partly optimized for signaling values and qualities, rather than for optimal policy consequences. If people voted on values rather than policies, this would superficially seem to make advertising your values and qualities more straightforward, and less destructive, because expressing your values is just what you are supposed to be doing. 

Several of these seem pretty big.

IV. Tricky things

There are also several obvious difficulties. A first difficulty is converting values into policies without the interference of the values of those people involved in doing the conversion. To put it less abstractly, if my nation decides that it values jobs a certain amount, and I am in charge of figuring out how to best create some jobs, and I don’t like people having jobs in forestry, you have to somehow stop me from just lying about whether forestry is a good place for creating jobs. 

While this seems hard to prevent entirely, there is already a lot of indirection between what people vote on and what happens in our current system. And probably this already biases outcomes far in favor of what intermediaries want. So the bar for improvement is not very high. I expect we could make a system of voting on values that was better than the current system in this regard. 

Another difficulty is that there isn’t a clearly good format for values to take while they are being voted on. Do you tick a box next to ‘people should be richer’? Probably not. Your vote would need to indicate how much some values are worth relative to others, and there are just a lot of things to value, and they don’t come in convenient units. Robin’s paper proposes a solution involving representatives, which at least demonstrates that this can be solved. I expect there are other ways to do it.
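For concreteness, here is one naive format one could imagine—my own illustration, with hypothetical outcome names, not the representative scheme from Robin's paper. Each voter splits a fixed budget of 100 points across a menu of outcomes, so a ballot itself encodes how much each value is worth relative to the others, and the electorate's value profile is the average allocation:

    from statistics import mean

    # Hypothetical menu of outcomes voters can weight against each other.
    OUTCOMES = ["employment", "health", "leisure", "environment"]

    def aggregate(ballots):
        """Average per-outcome weights across ballots to get the
        electorate's value profile (one simple rule of many)."""
        for b in ballots:
            assert sum(b.values()) == 100, "a ballot must allocate exactly 100 points"
        return {o: mean(b[o] for b in ballots) for o in OUTCOMES}

    ballots = [
        {"employment": 50, "health": 30, "leisure": 10, "environment": 10},
        {"employment": 10, "health": 40, "leisure": 20, "environment": 30},
        {"employment": 25, "health": 25, "leisure": 25, "environment": 25},
    ]
    print(aggregate(ballots))
    # -> {'employment': ~28.3, 'health': ~31.7, 'leisure': ~18.3, 'environment': ~21.7}

The fixed menu is itself an expressiveness restriction of the kind discussed next: values that don’t appear as menu items can’t be voted for at all.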

A related difficulty is deciding what kinds of things can be values. If you are going to aggregate everyone’s values, they will probably need to be in some common, easy-to-vote-on format, which will probably restrict expressiveness. Again, the bar for improving on the current system is not high however—choosing between two representatives whose level of agreement with your policy preferences is mostly explained by your being of the same species as them probably also reduces expressiveness.

There are probably also heaps of other problems that I haven’t thought of yet. I’m mostly suggesting that this is worth thinking about, rather than presenting a detailed proposal that doesn’t have terrible problems.

V. Alternatives (that are worse)

Sometimes people suggest that citizens just not be allowed to vote unless they meet certain intellectual standards, such as basic knowledge of the part of the world they are voting about. This would have the dreadful downside that the people who know less about the broader world—perhaps because they don’t have the resources to invest in reading about such things—have their interests completely ignored. Yet people find such proposals perennially appealing. I think voting on values is the natural resolution: it represents the interests of people who are not deeply educated about all policy-relevant aspects of the world, without absorbing their empirical misunderstandings.

Evidence on why abstract research is or isn’t respected

I previously suggested an explanation for very abstract research sometimes not being well respected: very abstract thought often looks superficially similar to very basic confusion, which looks amusingly silly. For instance, thinking about paraconsistent logic looks a lot like being confused about whether yes means no.

This theory suggests that abstract thought would mostly be less respected in areas where people have common sense views, because common sense is where it looks especially silly to be confused about basic assumptions.

I think this describes the abstract topics that Robin Hanson is interested in—and originally asked about—pretty well: the future, the human mind, the economy, practically relevant philosophy, and human behavior.

Maybe it’s just true of all areas? I don’t think so—biology and chemistry probably don’t attract many common sense views. Though physics and engineering probably have some, due to people having intuitive physics models.

So I think this is some evidence for the earlier theory, but I still don’t believe it that much.

Effective hypocrisy?

You know what is cheap? Talk. 

You know what is expensive? Action. 

You know what is cost-effective? Hypocrisy.

At least if non-word actions are not much more effective than words, which seems right. Differences to the world you can make without communicating seem limited. And for communication, words seem better. Maybe actions speak louder than words, when they speak. But words do most of the talking, because actions are private and not very intelligible. They are like a really loud mumble to oneself. Words are so much easier to hear that when you know about someone else’s actions, it’s virtually always just because you heard some words about them.

Does Effective Altruism fundamentally push toward hypocrisy? 

Mistakes #5: Blinded by winning

(Mistakes #1, #2, #3, #4)

I used to be a practicing atheist. I figured I had strong arguments against God’s existence. I talked to some Christians, and found that they were both ill-prepared to defend their views and shockingly uninterested in the fact that they couldn’t. This made them look like the epistemological analogues of movie villains: trivial to scorn.

Alas, this made me less likely to wonder if I was mistaken about the whole topic. If a person responds to criticisms of their beliefs with fluster and fascination with all other subjects, my natural response is not to back down and think about why I am wrong.

Yet I should have been confused. If a person is apparently doing a host of things because of fact X, and the balance of evidence doesn’t seem to support X, and the person doesn’t appear to care about that, one should probably question one’s assumption that X is a central part of their worldview. I still think I wasn’t wrong about X, but I was probably wrong about all these people toiling under peculiar and willfully misinformed views on X.

Thinking about it now, it seems unlikely that the existence and exact definition of God is anywhere near as central to religion as it seems to a literal-minded systematization-obsessed teenager with little religious experience. Probably religious people mostly believe in God, but it’s not like they came to that conclusion and then reluctantly accepted the implications of it. It’s part of a big cluster of intersecting things that are appealing for various reasons. I won’t go into this, because I don’t know much about it, and this post isn’t about what religion is about. (If you want a post that is about that, at least a bit, Scott Alexander wrote two good ones recently that seem about right to me.)

This post is about winning arguments. If you repeatedly win an argument too easily, I claim that you should be less sure that you know what is going on at all, rather than smug. My boyfriend points out that being perturbed by the weakness of your opponents’ arguments is perhaps the smuggest way to be unsure of yourself, so maybe I just think you should be less sure of yourself as well as smug.

Moral progress enhancement

[Epistemic status: speculation]

If moral progress is so important, probably we should try to improve it.

1. Why have ordinary people been immoral en masse?

From a previous post:

I would like to think I wouldn’t have been friends with slave owners, anti-semites or wife-beaters, but then again most of my friends couldn’t give a damn about the suffering of animals, so I guess I would have been. – Robert Wiblin

I expect the same friends would have been any of those things too, given the right place and period of history. The same ‘faults’ appear to be responsible for most old-fashioned or foreign moral failings: not believing that anything bad is happening if you don’t feel bad about it, and not feeling bad about anything unless there is a social norm of feeling bad about it.

That is, I claim that the procedure individuals use for morality has these key components:

  1. Conformist moral affect: people have moral feelings, and these mostly reflect what their peers deem right or wrong.
  2. Dictatorship of moral affect: moral feelings directly determine what people endorse.

So for instance if everyone around tortures puppies, most people consequently feel ok about puppy torture. And then if you feel ok torturing puppies, you assume that you are in fact ok with it, rather than for instance doing an extra step of conscious deliberation to check this.

(You might wonder what you would be consciously deliberating here: I’m not taking a stance on ethics or meta-ethics, but I think many popular stances do not equate moral correctness with ‘what a person feels like’ so it should often be intelligible to check that one endorses the output of one’s moral feelings.)

This is all a complicated way of saying ‘people do bad because they copy other people who do bad’.

I think it is valuable to say it in the complicated way, because it helps with seeing what might be done differently. It also makes it clearer why things are not so bad—if people only ever copied other people, human morality would be random, and it doesn’t seem to be.

I could say more about why I believe these things, but I probably won’t unless anyone especially disagrees.


2. Should everyone use a different procedure instead?

I claim that while these procedures lead to terrible moral failings by otherwise nice people, they also lead to virtually all nice moral behavior by nice people. So I wouldn’t want to abandon them hastily.

Plus, the obvious alternative seems worse. I’d probably much rather live in a society largely composed of sheep who follow others’ lead on moral issues than one where every individual reasoned about morality themselves from first principles—in whatever time they decided to allocate to the project—and then took their conclusions seriously.

But I expect that there are mild variations on the status quo that are improvements. If we look at how change usually happens, possibly we can direct it a bit.

3. How do morals change?

On this story, moral views should be basically stable apart from gradual drift. They change faster if people sometimes have anomalous moral feelings (i.e. feelings that don’t reflect the existing consensus around them), or if some people think about what is right independently of their own feelings.

For instance, a world that doesn’t care about animal welfare would likely remain so until enough people have empathy toward animals strong enough to make them feel bad about animal suffering in spite of popular indifference, or until some people think about whether they endorse animal suffering from some abstract standpoint (such as utilitarianism) and condemn it in spite of having few feelings about it. These sound about right to me as key ways that moral change happens, but I don’t know a lot about the history here, so I could easily be wrong.

4. How could morals change more and better?

There are probably lots of things to say about this, but I’ll say some random ones that I thought of.

I said that society moves away from existing moral equilibria by people having anomalous feelings, or people deciding to think about what is right independent of feelings. So things are likely to change more both when more people do those things more, and when the people who do those things have an easier time affecting anything. For instance, an initially uncaring society is more likely to come to care about animal welfare if more of its members find themselves empathising with animals in spite of common norms, or if that minority is respected more, or at least has more ways to barrage people who disagree with them with videos that might change their feelings.

This says nothing about the direction of change, however. It isn’t obvious whether more or less change is good, how many directions change happens in, or how many of them are good. But perhaps we can say something more specific about what kinds of feelings or independent moral thought help.

5. Separating moral feelings and moral positions

My guess is that thinking about ethics instead of acting directly on ethical feelings is usually good. Even if you think ethical feelings are a good basis for decisions, thinking about ethics seems useful because it tends to take a bunch of feelings related to different situations as inputs, and look for consistent positions across a range of questions. My guess is that if there are some ethical views that you would endorse after much thought, this method gets more information about them out of your ethical feelings than acting on each ethical feeling in a one-off fashion does.

I might be failing to think of some kinds of ethical thought that people do. The kinds I’m familiar with seem to focus on trying to come up with general principles that unite a bunch of moral feelings (including feelings about how morality should involve general principles and not depend on arbitrary things like spatial coordinates).

6. Having better moral feelings

There are probably lots of ways to go about getting unusual moral feelings. You can pick them up from other cultures, or make unusual conceptual associations, or take drugs, or have some sort of weird morally relevant synesthesia. So I wonder if you can disproportionately try to cause aberrant moral feelings that are useful for moral progress. My guess is yes, but before discussing that, let’s consider common ways moral feelings do change.

First, I wonder if anomalous feelings are often just from changing which group your moral feelings are trying to conform with. If you begin to think of foreigners or women or animals as being in your social sphere, and you imagine that they don’t approve of being treated badly in certain ways, then you come to think treating them badly is immoral just by the usual process of conforming with local moral consensus.

Another kind of moral feeling seems to come from generalizing moral feelings you already have. For instance, if you have a strong sense that pain is bad, and also a sense that it is ok to whip people as punishment, then when you watch someone getting whipped and see that it involves pain, you probably end up with some conflicting feelings. And if you grew up away from people getting whipped, so that your feelings about whether it should be allowed are unusually weak, your sense that causing pain is wrong might win out where it didn’t for other people in your society. So that’s another way you might end up with unusual moral feelings.

I think there is a large class of cases like this, where people have moral feelings about the badness of internal states like suffering or indignity, and moral feelings about it being ok to take certain external actions, but where the external actions cause the internal states for someone else. For instance, it might feel wrong for innocent people to live in destitution and danger, and it might also feel right to be able to control who enters one’s country. Both of these might be prevalent views. Which feelings you end up having about the overall issue of refugee quotas is then not very determined. In situations like this, I think people often have unusual feelings relative to those around them because they are in a slightly unusual position—for instance, one where refugees are unusually salient.

7. A specific suggestion for having better moral feelings

I propose that a good way to have novel and useful moral feelings is to try to experience the situations and feelings of the parties involved in the relevant issue, in accurate proportions. For instance, if you are making decisions about animal welfare, and you have interacted with distressed chickens, and happy chickens, and competing farm owners, and people who do somewhat better on a meat-based diet—and have spent more time with the chickens than the farm owners in proportion to their numbers—then I expect your feelings to be different from most people’s, and also to more accurately track the ethical views you would want to have.

Sometimes it is possible to experience the interests of one side much more strongly than the other. For instance, you might one day be able to see that a genetically modified person is well off, but it will be harder to really experience the badness of playing God. So the proposed heuristic for honing moral feelings might seem inherently utilitarian, in that it only accounts for the feelings of conscious entities. I don’t think that’s true though. You can still set out to experience the things that might most viscerally elicit the feeling of badness of playing God. I can’t actually think of anything that would make me feel conflicted about playing God in the relevant way, so maybe I should find out what makes someone else feel bad about it, at least before I play God. My guess is that there are situations that will make me feel more uneasy about playing God, and I’m suggesting that I will have better moral feelings in expectation if I try to actually viscerally experience those.