Probabilistic self-defeat arguments

Alvin Plantinga’s ‘evolutionary argument against naturalism’ (EAAN) goes like this:

  1. If humans were created by natural selection, and also not under the guidance of a creator (‘naturalism’), then (for various reasons he gives) the probability that their beliefs are accurate is low.
  2. Therefore believing in natural selection and naturalism should lead a reasonable person to abandon these beliefs (among others); i.e. belief in naturalism and natural selection is self-defeating.
  3. Therefore a reasonable person should not believe in naturalism and natural selection.
  4. Naturalism is generally taken to imply natural selection, so a reasonable person should not believe in naturalism.

This has been attacked from many directions. I agree with others that what I have called point 1 is dubious. However, even accepting it, it seems to me the argument fails.

Let us break down the space of possibilities under consideration into:

  • A: N&T: naturalism and true beliefs
  • B: N&F: naturalism and false beliefs
  • C: G&T: God and true beliefs

The EAAN says conditional on N, B is more likely than A, and infers from this that one cannot believe in either A or B, since both are included in N.

But there is no obvious reason to lump A and B together. Why not lump B and C together? Suppose we believe ‘natural selection has not produced true beliefs in us’. Then either natural selection has produced false beliefs, or God has produced true beliefs. If we don’t assign very high credence to the latter relative to the former, then we have a version of the EAAN that contradicts its earlier incarnation: ‘natural selection has not produced true beliefs in us’ is self-defeating. So we must believe that natural selection has produced true beliefs in us*.

What if we do assign a very high credence to C over B? It seems we can just break C up into smaller parts, and defeat them one at a time. B is more likely than C&D, where D = “I roll 1 on my n-sided die”, for some value of n. So consider the belief “B or C&D”. This is self-defeating. As it would seem “B or C&E” is, where E is “I roll 2 on my n-sided die”. And so on.
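To see how the arithmetic works, here is a minimal sketch in Python. The credences and the value of n are made up for illustration, not taken from Plantinga; the point is only that for a small enough probability of D, belief in the disjunction ‘B or C&D’ is dominated by B, and so counts as self-defeating by the EAAN’s own standard, no matter how probable C was to begin with.

    # Illustrative credences (assumed for this sketch, not anything Plantinga gives).
    p_B = 0.2          # B: naturalism and false beliefs
    p_C = 0.8          # C: God and true beliefs
    n = 100            # sides on the hypothetical die
    p_D = 1.0 / n      # D: rolling a 1, assumed independent of C

    # Credence in the disjunction "B or (C and D)":
    p_disjunction = p_B + p_C * p_D

    # Conditional on the disjunction, how likely is the unreliable-beliefs branch?
    p_B_given_disjunction = p_B / p_disjunction

    print(f"P(B or C&D)     = {p_disjunction:.3f}")          # 0.208
    print(f"P(B | B or C&D) = {p_B_given_disjunction:.3f}")  # 0.962
    # Increasing n pushes the conditional probability toward 1, so the
    # disjunction can always be made as "self-defeating" as B itself.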

By this reasoning, if there is any possible world that is self-defeating, pretty much any other possible world can be sucked into the defeat. This depends a bit on the details about how unlikely reliable beliefs must be for belief in that situation to be self-defeating, and how the space can be broken up. But generally, this reasoning allows the self-defeatingness of any state of affairs to contaminate any other state of affairs that can be placed in disjunction with it, and that can be broken into states of affairs not much more probable than it.

It seems to me that any reasoning with this property must be faulty. So I suggest probabilistic self-defeat arguments of this form can’t work in general.

It could be that Plantinga means to make a stronger argument, for instance ‘there is no set of beliefs consistent with naturalism under which one’s beliefs have high probability’, but this seems like quite a hard argument to make. I could place a high probability on A for instance.

It could also be that Plantinga means to use further assumptions that make a distinction between grouping A and B together and grouping B and C together. One possibility is that it is important that N is a cause of T or F, but this seems both ad-hoc and possible to get around. At any rate, Plantinga doesn’t seem to articulate further assumptions in the account of his argument that I read, so his argument seems unlikely to be correct as it stands, all other criticisms aside.

*Note that if you wanted to turn the argument against creationism, it seems you could also just expand the space to include creators who don’t produce true beliefs, and, depending on probabilities, use this to defeat the belief in a creator, including one who does produce true beliefs.

The future of values 2: explicit vs. implicit

Relatively minor technological change can move the balance of power between values that already fight within each human. Beeminder empowers a person’s explicit, considered values over their visceral urges, in much the same way that the development of better slingshots empowers one tribe over another.

In the arms race of slingshots, the other tribe may soon develop their own weaponry. In the spontaneous urges vs. explicit values conflict though, I think technology should generally tend to push in one direction. I’m not completely sure which direction that is however.

At first glance, it seems to me that explicit values will tend to have a much better weapons research program. This is because they have the ear of explicit reasoning, which is fairly central to conscious research efforts. It seems hard to intentionally optimize something without admitting at some point in the process that you want it.

When I want to better achieve my explicit goal of eating healthy and cheap food for instance, I can sit down and come up with novel ways to achieve this. Sometimes such schemes even involve trickery of the parts of myself that don’t agree with this goal, so divorced are they from this process. When I want to fulfill my urge to eat cookie dough on the other hand, I less commonly deal with this by strategizing to make cookie dough easier to eat in the future, or to trick other parts of myself into thinking eating cookie dough is a prudent plan.

However this is probably at least partly due to the cookie dough eating values being shortsighted. I’m having trouble thinking of longer term values I have that aren’t explicit on which to test this theory, or at least having trouble admitting to them. This is not very surprising; if they are not explicit, presumably I’m either unaware of them or don’t endorse them.

This model in which explicit values win out could be doubted for other reasons. Perhaps it’s pretty easy to determine unconsciously that you want to live in another suburb because someone you like lives there, and then, after you have justified it by saying it will be good for your commute, all the logistics that you need to be conscious for can still be carried out. In this case it’s easy to almost-optimize something consciously without admitting that you want it. Maybe most cases are like this.

Also note that this model seems to be in conflict with the model of human reasoning as basically involving implicit urges followed up by rationalization. And sometimes at least, my explicit reasoning does seem to find innovative ways to fulfill my spontaneous urges. For instance, it suggests that if I do some more work, then I should be able to eat some cookie dough. One might frame this as conscious reasoning merely manipulating laziness and gluttony to get a better deal for my explicit values. But then rationalization would say that. I think this is ambiguous in practice.

Robin Hanson responds to my question by saying there are not even two sets of values here to conflict, but rather one which sometimes pretends to be another. I think it’s not obvious how that is different, if pretending involves a lot of carrying out what an agent with those values would do.

An important consideration is that a lot of innovation is done by people other than those using it. Even if explicit reasoning helps a lot with innovation, other people’s explicit reasoning may side with your inchoate hankerings. So a big question is whether it’s easier to sell weaponry to implicit or explicit values. On this I’m not sure. Self-improvement products seem relatively popular, and to be sold directly to people more often than any kind of products explicitly designed to e.g. weaken willpower. However products that weaken willpower without an explicit mandate are perhaps more common. Also much R&D for helping people reduce their self-control is sponsored by other organizations, e.g. sellers of sugar in various guises, and never actually sold directly to the customer (they just get the sugar).

I’d weakly guess that explicit values will win the war. I expect future people to have better self-control, and do more what they say they want to do. However this is partly because of other distinctions that implicit and explicit values tend to go along with; e.g. farsighted vs. not. It doesn’t seem that implausible that implicit urges really wear the pants in directing innovation.

The future of values

Humans of today control everything. They can decide who gets born and what gets built. So you might think that they would basically get to decide the future. Nevertheless, there are some reasons to doubt this. In one way or another, resources threaten to escape our hands and land in the laps of others, fueling projects we don’t condone, in aid of values we don’t care for.

A big source of such concern is robots. The problem of getting unsupervised strangers to carry out one’s will, rather than carrying out something almost but not quite like one’s will, has eternally plagued everyone with a cent to tempt such a stranger with. There are reasons to suppose the advent of increasingly autonomous robots with potentially arbitrary goals and psychological tendencies will not improve this problem.

If we avoid being immediately trodden on by a suddenly super-superhuman AI with accidentally alien values, you might still expect a vast new labor class of diligent geniuses with exotic priorities would snatch a bit of influence here and there, and eventually do something you didn’t want with the future you employed them to help out with.

The best scenario for human values surviving far into an era of artificial intelligence may be the brain emulation scenario. Here the robot minds start out as close replicas of human minds, naturally with the same values. But this seems bound to be short-lived. It would likely be a competitive world, with strong selection pressures. There would be the motivation and technology to muck around with the minds of existing emulations to produce more useful minds. Many changes that would make a person more useful for another person might involve altering that person’s values.

Regardless of robots, it seems humans will have more scope to change humans’ values in the future. Genetic technologies, drugs, and even simple behavioral hacks could alter values. In general, we understand ourselves better over time, and better understanding yields better control. At first it may seem that more control over the values of humans should cause values to stay more fixed. Designer babies could fall much closer to the tree than children traditionally have, so we might hope to pass our wealth and influence along to a more agreeable next generation.

However even if parents could choose their children to perfectly match their own values, selection effects would determine who had how many children – somewhat more strongly than they can now – and humanity’s values would drift over the years. If parents also choose based on other criteria – if they decide that their children could do without their own soft spot for fudge, and would benefit from a stronger work ethic – then values could change very fast. Or genetic engineering may just produce shifts in values as a byproduct. In the past we have had a safety net because every generation is basically the same genetically, and so we can’t erode what is fundamentally human about ourselves. But this could be unravelled.

Even if individual humans maintain the same values, you might expect innovations in institution design to shift the balance of power between them. For instance, what was once an even fight between selfishness and altruism within you could easily be tipped by the rest of the world making things easier for the side of altruism (as they might like to do, if they were either selfish or altruistic).

Even if you have very conservative expectations about the future, you probably face qualitatively similar changes. If things continue exactly as they have for the last thousands of years, your distant descendants’ values will be as strange to you as yours are to your own distant ancestors.

In sum, there is a general problem with the future: we seem likely to lose control of a lot of it. And while in principle some technology seems like it should help with this problem, it could also create an even tougher challenge.

These concerns have often been voiced, and seem plausible to me. But I summarize them mainly because I wanted to ask another question: what kinds of values are likely to lose influence in the future, and what kinds are likely to gain it? (Selfish values? Far mode values? Long term values? Biologically determined values?)

I expect there are many general predictions you could make about this. And as a critical input into what the future looks like, future values seem like an excellent thing to make predictions about. I have predictions of my own; but before I tell you mine, what are yours?

Which stage of effectiveness matters most?

Many altruistic endeavors seem overwhelmingly likely to be ineffective compared to what is possible. For instance building schools, funding expensive AIDS treatment, and raising awareness about breast cancer and low status.

For many other endeavors, it is possible to tell a story under which they are massively important, and hard to conclusively show that we don’t live in that story. Yet it is also hard to make a very strong case that they are better than a huge number of other activities. For instance, changing policy discourse in China, averting rainforest deforestation or pushing for US immigration reform.

There are also (at least in theory) endeavors that can be reasonably expected to be much better than anything else available. Given current disagreement over what fits in this category, it seems to either be empty at the moment, or highly dependent on values.

An important question for those interested in effective altruism is whether most of the gains from effectiveness are to come from people who support the obviously ineffective endeavors moving to plausibly effective ones, or from people who support the plausibly effective endeavors moving to the very probably effective ones.

One reason this matters is that the first jump requires hardly any new research about actual endeavors, while the second seems to require a lot of it. Another is that the first plan involves engaging quite a different demographic to the second, and probably in a different way. Finally, the second plan requires intellectual standards that can actually filter out the plausible endeavors from the very good ones. Such standards seem hard to develop and maintain. Upholding norms that filter terrible interventions from plausible ones is plenty of work, and probably easier.

My own intuition has been that most of the value will come from the second possibility. However I suspect others have the opposite feeling, or at least aim to exploit the first possibility more at the moment. What do you think? Is the distinction even just?

Why would evolution favor more bad?

Sometimes people argue that pain and suffering should be expected to overwhelm the world because bad experiences are ‘stronger’ in some sense than good ones. People generally wouldn’t take five minutes of the worst suffering they have ever had for five minutes of the best pleasure (or so I’m told). An evolutionary explanation sometimes given is that the things that happen to animals tend to be mildly beneficial for them most of the time, then occasionally very bad. For instance, eating food is a bit good, but one meal won’t guarantee you evolutionary success. If on the other hand someone else eats you, you have lost pretty badly.

This seems intuitively plausible. Many processes have the characteristic that you can add more bricks and gradually reach your goal, but taking away a brick causes the whole thing to crumble. However, good and bad outcomes are relative. If you see a snake and are deciding whether to go near it or not, there is a worse outcome of it biting you, and a better outcome of it not biting you. The good outcome here is super valuable, even if it doesn’t buy you immediate evolutionary success. It is just as important for you to get the good outcome as for you to not get the bad outcome. So what exactly do we mean by bad outcomes being worse than good outcomes are good? It seems we are judging outcomes relative to some default. So we need an explanation for why the default is where it is.

I think the most obvious guess is that the default is something like expectations, or ‘business as usual’. If you generally expect to go through your morning not being killed, then avoiding the snakebite is close to neutral, whereas being bitten is very bad. But if the default is expectations, then the expected badness and the expected goodness should roughly cancel out: if suffering just tends to be stronger, then it should also tend to be rare enough to cancel. So on this model you shouldn’t especially expect life to be net bad.
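As a toy version of that cancellation (the numbers here are mine, purely to illustrate the arithmetic): if outcomes are measured as deviations from the expected ‘business as usual’ baseline, then intense-but-rare bad outcomes and mild-but-common good ones can sum to zero.

    from fractions import Fraction

    # Toy numbers, assumed purely for illustration: outcomes are measured as
    # deviations from the expected "business as usual" baseline.
    p_bad = Fraction(1, 100)     # rare, very bad outcome (being bitten)
    value_bad = Fraction(-99)    # its intensity relative to the baseline

    p_good = 1 - p_bad           # common, mildly good outcome (not being bitten)
    value_good = Fraction(1)     # its intensity relative to the baseline

    expected_deviation = p_bad * value_bad + p_good * value_good
    print(expected_deviation)    # 0: the bad outcome is 99x as intense, but
                                 # rare enough that good and bad exactly cancel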

At least on this model the badness and goodness should have cancelled out in the evolutionary environment. Our responses to good and bad situations don’t seem to change with our own expectations that much – even if you have been planning to go to the dentist for months, and it isn’t as bad as you thought, it can still be pretty traumatic. So you might think the default is fairly stable, and after we have been pushed far from our evolutionary environment, joy and suffering could be out of balance. Since we have been the ones pushing ourselves from the evolutionary environment, you might think we have been pushed basically in the direction of things we like (living longer, avoiding illness and harsh physical conditions, minimizing hard labor). So you might expect it is out of balance in the direction of more joy.

This story has some gaps. Why would we experience positive and negative emotions relative to rough expectations? Is the issue really expectations, or just something that looks a bit like that? To answer these questions one would seem to need a much better understanding of the functions of emotional reactions than I have. For now though, a picture where positive and negative emotions were roughly equal in some sense at some point seems plausible, and on that picture, I expect they are now net positive for humans, and roughly neutral for animals (by that same measure). This contributes to my lack of concern for both wild animal suffering, and the possibility that human lives are broadly not worth living.

There are many further issues unresolved. The notion that pleasure and pain should be roughly balanced for some reason is given much of its intuitive support by the observation that they are close enough that which is greater seems somewhat controversial. But perhaps net pleasure and pain only seem to be broadly comparable because humans are bad at comparing things, especially nebulous things. It is not uncommon to be both unclear on whether to go to school A or school B, and also unclear on whether you should go to school A with $10,000 or school B. Another issue is whether the measure by which there were similar amounts of pleasure and suffering actually aligns with your values. Perhaps positive and negative emotions use similar amounts of total mental energy, but mental energy translates to experiences you like more efficiently than to ones you don’t. Another concern is whether animals in general should be in such an equilibrium, or whether perhaps only animals that survive should, and all the offspring produced that die immediately don’t come into the calculus and can just suffer wantonly.

I think it is hard to give a conclusive account of this issue at the moment, but as it stands I don’t see how evolutionary considerations suggest we should expect bad feelings to dominate.