Tag Archives: philosophy

Suspicious arguments regarding cow counting

People sometimes think the doomsday argument is implausible because it always says we are more likely to die out sooner than our other reasoning suggests, regardless of the situation. There’s something dubious about an argument that reaches the same conclusion about the world regardless of any evidence about it. Nick Bostrom paraphrases the objection: “But isn’t the probability that I will have any given rank always lower the more persons there will have been? I must be unusual in some respects, and any particular rank number would be highly improbable; but surely that cannot be used as an argument to show that there are probably only a few persons?” (He does not himself endorse this view.)

That this reasoning is wrong is no new insight. Bostrom explains, for instance, that in any given comparison between futures of different lengths, the doomsday reasoning doesn’t always give you the same outcome: you might learn that your birth rank rules out the shorter future entirely. It remains the case, though, that the shift from whatever you currently believe to what the doomsday argument tells you to believe is always a shift toward shorter futures. I think it is this that seems fishy to people.
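To make the structure of that shift concrete, here is a minimal sketch of the update under the self-sampling assumption. The two candidate population totals and the birth rank are my own illustrative numbers, not part of the argument itself:

```python
# Doomsday-style Bayesian update on one's birth rank, under the
# self-sampling assumption: P(rank r | total population N) = 1/N
# if r <= N, and 0 otherwise.

def posterior(priors, totals, rank):
    likelihoods = [1.0 / n if rank <= n else 0.0 for n in totals]
    joint = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

totals = [2e11, 2e14]  # 'doom sooner' vs 'doom later' total populations
priors = [0.5, 0.5]    # whatever you believed beforehand

print(posterior(priors, totals, rank=1e11))
# -> roughly [0.999, 0.001]: weight always moves toward the smaller
#    total, so long as the observed rank is compatible with both.
#    A rank above 2e11 would instead rule out 'doom sooner' entirely,
#    the exception noted above.
```

For what it’s worth, the Self Indication Assumption discussed below would first weight each hypothesis in proportion to its population, which cancels this shift exactly.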

I maintain that the argument’s predictable conclusion is not a problem at all, and I would like to make this vivid.

Once a farmer owned a herd of cows. He would diligently count them, to ensure none had escaped, and to discover whether there were any new calves. He would count them by lining them up and running his tape measure along the edge of the line.

“One thousand cows,” he exclaimed one day. “Fifty new calves!”

His neighbour heard him from a nearby field, and asked what he was talking about. The farmer held out his tape measure. The incredulous neighbour explained that since cows are more than an inch long, his figures would need some recalculation. Since the cows were about five feet long on average, the neighbour guessed he would need to divide his number by 60. But the farmer quickly saw that this argument must be bogus. If his neighbour were right, then whatever number of cows he counted, the argument would say he had fewer. What kind of argument would that be?
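(For concreteness, with my own numbers: at roughly 60 inches per cow, a tape reading of 1,000 inches works out to about 1000/60 ≈ 17 actual cows, and the fifty ‘calves’ to less than one extra animal.)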

One similar to the Doomsday Argument’s claim that the future should always be shorter than we otherwise think. In such cases the claim is that your usual method of dealing with evidence is biased, not that there is some particular uncommon piece of evidence you didn’t know about.

Similarly, the Self Indication Assumption’s ‘bias’ toward larger worlds is taken as a reason against it. Yet it is just a claim that our usual method is biased toward small worlds.

Is it obvious that pain is very important?

“Never, for any reason on earth, could you wish for an increase of pain. Of pain you could wish only one thing: that it should stop. Nothing in the world was so bad as physical pain. In the face of pain there are no heroes, no heroes […]” – George Orwell, 1984, via Brian Tomasik, who seems to agree that just considering pain should be enough to tell you that it’s very important.

It seems quite a few people I know consider pain to have some special status of badness, such that preventing it is much more important than I think it is. I wouldn’t object, except that they apply this in their ethics, rather than just in their preferences regarding themselves: for instance, arguing that other people shouldn’t have children, because of the possibility of those children suffering pain. I think pain is less important to most people, relative to their other values, than such negative utilitarians and similar folk believe.

One such argument for the extreme importance of pain is something like ‘it’s obvious’. When you are in a lot of pain, nothing seems more important than stopping that pain. Hell, even when you are in a small amount of pain, mitigating it seems a high priority. When you are looking at something in extreme pain, nothing seems more important than stopping that pain. So pain is just obviously the most important bad thing there is. The feeling of wanting a boat and not having one just can’t compare to pain. The goodness of lying down at the end of a busy day is nothing next to the badness of even relatively small pains.

I hope I do this argument justice, as I don’t have a proper written example of it at hand.

An immediate counter is that when we are not in pain, or directly looking at things in pain, pain doesn’t seem so important. For instance, though many people in the throes of a hangover consider it to be pretty bad, they are repeatedly willing to trade half a day of hangover for an evening of drunkenness. ‘Ah’, you may say, ‘that’s just evidence that life is bad – so bad that they are desperate to relieve themselves from the torment of their sober existences! So desperate that they can’t think of tomorrow!’ But people have been known to plan drinking events, and even to be in quite good spirits in anticipation of the whole thing.

It is implicit in the argument from ‘pain seems really bad close up’ that pain does not seem so bad from a distance. How then to know whether your near or far assessment is better?

You could say that up close is more accurate, because everything is more accurate with more detail. Yet since this is a comparison between different values, being up close to one relative to others should actually bias the judgement.

Perhaps up close is more accurate because at a distance we do our best not to think about pain, because it is the worst thing there is.

If you are like many people, when you are eating potato chips you really want to eat more potato chips. Concerns about your health, your figure, and your growing nausea all pale into nothing next to your drive to eat more potato chips. We don’t take that as good evidence that deep down you really want to eat a lot of potato chips, and are just avoiding thinking about it the rest of the time to stop yourself from going crazy. How is pain different?

Are there other reasons to pay special attention to the importance of pain to people who are actually experiencing it?

Added: I think I have a very low pain threshold, and am in a lot of pain far more often than most people. I also have bad panic attacks from time to time, which I consider more unpleasant than any pain I have come across, and milder panic attacks frequently. So it’s not that I don’t know what I’m talking about. I agree that suffering comes with (or consists of) an intense urge to stop the suffering ASAP. I just don’t see that this means that I should submit to those urges the rest of the time. To the contrary! It’s bad enough to devote that much time to such obsessions. When I am not in pain I prefer to work on other goals I have, like writing interesting blog posts, rather than say trying to discover better painkillers. I am not willing to experiment with drugs that could help if I think they might interfere with my productivity in other ways. Is that wrong?

Reasons for Persons

Suppose you are replicated on Mars, and the copy of you on Earth is killed ten minutes later. Most people feel there is some definite answer to whether the Martian is them or someone else. Not an answer obtained by merely defining ‘me’ to include or exclude alien clones, but some real me-ness which persists or doesn’t, even if they don’t know which. In Reasons and Persons, Derek Parfit argues that there is no such thing. Personal identity consists of physical facts, such as how well I remember being a ten year old and how similar my personality is to that girl’s. There is nothing more to say about whether we are the same person than things like this, plus pragmatic definitional judgements, such as that a label should only apply to one person at a given time. He claims that such continuity of memories and other psychological features is what matters to us, so as long as that continuity exists it shouldn’t matter whether we decide to call someone ‘me’ or ‘my clone’.

I agree with him for the most part. But he is claiming that most people are very wrong about something they are very familiar with. So the big question must be why everyone is so wrong, and why they feel so sure of it. I have had many a discussion where my conversational partner insists that if they were frozen and revived, or a perfect replica were made of them, or whatever, it would not be them. 

To be clear, what exactly is this fallacious notion of personal identity that people have?

  • each human has one and only one, which lasts with them their entire life
  • if you cease to have it you are dead, because you are it
  • it doesn’t wax or wane; it can only be present or absent
  • it is undetectable (except arguably from the inside)
  • two people can’t have the same one, even if they both split from the same previous person somehow
  • identities are unique even if they have the same characteristics: if I were you and you were me, our identities would be the other way around from how they are, and that would be different from the present situation

So basically, they are like unique labels for each human, which label all parts of that human and distinguish it from all other humans. Except that they are not labels; they are really there, characterising each creature as a particular person.

I suspect, then, that the use of such a notion is a basic part of conducting social relationships. Suppose you want to have nuanced relationships, with things like reciprocation and threats and loyalty, with a large number of other monkeys. Then you should be interested in things like which monkey today is the one who remembers that you helped them yesterday, or which is the one you have previously observed get angry easily.

This seems pretty obvious, but that’s because you are so well programmed to do it. There are actually a lot of more obvious surface characteristics you could pay attention to when categorising monkeys for the purpose of guessing how they will behave: where they are, whether they are smiling, eating, asleep. But these are pretty useless next to apparently insignificant details such as that they have large eyes and a hairier than average nose, which are important because they are signs of psychological continuity. So you have to learn to categorise monkeys, unlike other things, by tiny clues to some hidden continuity inside them. There is no need to think of yourself as tracking anything complicated, like a complex arrangement of consistent behaviours that are useful to you, so you just think of what you care about in others as an invisible thing which runs throughout a single person at all times and is never in any other people.

The clues might differ over time. The clues that told you which monkey was Bruce ten years ago might be quite different from the ones that tell you now. Yet you will do best to steadfastly believe in a continuing Bruceness inside all those creatures. That is because even if he changes from an idealistic young monkey to a cynical old monkey, he still remembers that he is your friend, and all the nuances of your relationship, which is what you want to keep track of. So you think of his identity as stretching through an entire life, and as not getting stronger or weaker according to his physical details.

One very simple heuristic for keeping track of these invisible things is that there is only ever one instantiation of each identity at a given time. If the monkey in the tree is Mavis, then the monkey on the ground isn’t. Even if they are identical twins, and you can’t tell them apart at all, the one you are friends with will behave differently to you than the one whose nuts you stole, so you’d better be sure to conceptualise them as different monkeys, even if they seem physically identical.

Parfit argues that what really matters – even if we don’t appreciate it because we are wrong about personal identity – is something like psychological or physical continuity. He favours psychological continuity, if I recall correctly. However, if the main point of this deeply held belief in personal identity is to keep track of relationships and behavioural patterns, that suggests that what really matters to us in that vicinity is more limited than psychological continuity. A lot of psychological continuity is irrelevant for tracking relationships. For instance, if you change your tastes in food, or have a terrible memory for places, or change over many years from being reserved to being outgoing, people will not feel that you are losing who you are. However, if you change your loyalties, or become unable to recognise your friends, or have fast unpredictable shifts in your behaviour, I think people will.

Which is not to say I think you should care about these kinds of continuity when you decide whether an imperfect upload would still be you. I’m just hypothesising that these are the things that will make people feel that ‘what matters’ in personal identity has been maintained, should they stop thinking that what matters is an invisible temporal string. Of course, what you should call yourself, for the purpose of caring disproportionately about it and protecting its life, is a matter of choice, and I’m not sure any of these criteria is the best basis for it. Maybe you should just identify with everyone and avoid dying until the human race ends.

What to not know

I just read ‘A counterexample to the contrastive account of knowledge’ by Jason Rourke, at the suggestion of John Danaher. I’ll paraphrase what he says before explaining why I’m not convinced. I don’t actually know much more about the topic, so maybe take my interpretation of a single paper with a grain of salt. Which is not to imply that I will tell you every time I don’t know much about a topic.

Traditionally ‘knowing’ has been thought of as a function of two things: the person who does the knowing, and the thing that they know. The ‘Contrastive Account of Knowledge’ (CAK) says that it’s really a function of three things – the knower, the thing they know, and the other possibilities that they have excluded.

For instance, I know it is Monday if we take the alternatives to be things like ‘it is Tuesday and my computer is accurate on this subject’. I have excluded all those possibilities just now by looking at my computer. However, if alternatives such as ‘it is Tuesday, and my memory and computer say it is Monday’ are under consideration, then I don’t know that it’s Monday. Whether I have the information to say P is true depends on what’s included in not-P.
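To make that dependence vivid, here is a toy sketch of CAK as a three-place relation. The function name and the particular evidence set are my own illustrative choices, not anything from the paper:

```python
# Toy model of the contrastive account: S knows p rather than the
# alternatives Q iff S's evidence excludes every member of Q.

def knows(excluded_by_evidence, alternatives):
    return all(q in excluded_by_evidence for q in alternatives)

# Looking at my computer excludes ordinary error, but not the
# possibility that the computer itself is wrong.
excluded = {"it is Tuesday and my computer is accurate"}

print(knows(excluded, {"it is Tuesday and my computer is accurate"}))
# -> True: relative to this contrast class, I know it is Monday.
print(knows(excluded, {"it is Tuesday and my memory and computer say it is Monday"}))
# -> False: relative to this one, I don't.
```

The same proposition is known relative to one contrast class and unknown relative to another, which is the three-place structure CAK proposes.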

So it seems to me CAK would be correct if there were no inherent set of alternatives to any given proposition, or if, when we say something is known, we often mean that only some of these alternatives have been excluded. It would be wrong if knowing X didn’t rely on any consideration of the mutually exclusive alternatives, and unimportant if each proposition determined a single set of alternatives, which is what people always mean to consider.

Rourke seems to be arguing that CAK is not like what we usually mean by knowledge. He seems to be doing this by claiming that knowing things need not involve consideration of the alternatives. He gives this example:

The Claret Case. Imagine that Holmes and Watson are investigating a crime that occurred during a meeting attended by Lestrade, Hopkins, LeVillard, and no others. The question Who drank claret? is under discussion. Watson announces ‘‘Holmes knows that Lestrade drank claret.’’ Given the question under discussion and the facts described, the alternative propositions that partially constitute the knowledge relation are Hopkins drank claret and LeVillard drank claret.

He then argues basically that Holmes can know that Lestrade drank claret without knowing that Hopkins and LeVillard didn’t drink claret, since all their claret drinking was independent. He thinks this contradicts CAK because he claims, using CAK,

 The logical form of Watson’s announcement, then, is Holmes knows that Lestrade drank claret rather than Hopkins drank claret or LeVillard drank claret.

Whereas we want to say that Holmes does know Lestrade drank claret, if for instance he sees Lestrade drinking claret, and he need not necessarily know anything about what Hopkins and LeVillard were up to.

Which prompts the question of why Rourke thinks the other men’s drinking constitutes the alternatives to Lestrade’s drinking in the knowledge relation. The obvious real alternative to exclude is that Lestrade didn’t drink.
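In the same toy terms as the earlier sketch (again my own labels, not Rourke’s), the choice of contrast class makes all the difference:

```python
def knows(excluded_by_evidence, alternatives):
    # Same toy relation as before: the evidence must exclude every alternative.
    return all(q in excluded_by_evidence for q in alternatives)

# Holmes saw Lestrade drinking claret, which excludes Lestrade's not
# drinking, but tells him nothing about Hopkins or LeVillard.
excluded = {"Lestrade did not drink claret"}

print(knows(excluded, {"Lestrade did not drink claret"}))
# -> True: against the natural contrast, Holmes knows.
print(knows(excluded, {"Hopkins drank claret", "LeVillard drank claret"}))
# -> False: against Rourke's contrast class, he doesn't.
```

The counterexample gets its force entirely from the second reading of the contrast class.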

Rourke gets to something like this as a counterargument, and argues against it. He says that if ‘who drank claret?’ is interpreted as ‘work out whether or not each person drank claret’, then it can be divided up in this way: ‘Lestrade drank claret’ vs. ‘Lestrade did not drink claret’, combined with ‘Hopkins drank claret’ vs. ‘Hopkins did not drink claret’, and so on. However, if the question is meant as something like ‘who is a single person who drank claret?’, then ‘knowing’ the answer doesn’t require excluding all the alternative answers to the question, some of which may be true.

As far as I can tell, this seems troublesome because he supposes that the alternatives to the purported knowledge must be the various other possible answers to the question, if what you supposedly know is ‘the answer to the question’. The alternative answers to such a question can only be positive reports of different people drinking, or that nobody drank. The question doesn’t ask for any mentions of who didn’t drink. So what can we contrast ‘Lestrade drank’ with, if not ‘Lestrade didn’t drink’?

But why suppose that the alternatives must be the other answers to the question? If ‘knowing who drank claret’ just means knowing that a certain answer to that question is true rather than false, there seems to be no problem. Perhaps ‘I know who drank’ means that I know ‘Lestrade did’ is one answer to the question. This can happily be contrasted with ‘Lestrade did’ not being an answer. Why not suppose ‘I know who drank claret’ is shorthand for something like that?

It seems that, at least for any specific state of the world, it’s possible to think of knowing it in terms of excluding the alternatives. It also seems that answering more awkwardly worded questions such as the one above must still be based on knowledge about straightforward states of the world. So how could knowledge of, say, at least one person who drank not be understandable in terms of excluding alternatives?

What ‘believing’ usually is

Experimental Philosophy discusses the following experiment. Participants were told a story of Tim, whose wife is cheating on him. He gets a lot of evidence of this, but tells himself it isn’t so.

Participants given this case were then randomly assigned to receive one of the two following questions:

  • Does Tim know that Diane is cheating on him?
  • Does Tim believe that Diane is cheating on him?

Amazingly enough, participants were substantially more inclined to say yes to the question about knowledge than to the question about belief.

This idea that knowledge absolutely requires belief is sometimes held up as one of the last bulwarks of the idea that concepts can be understood in terms of necessary conditions, but now we seem to be getting at least some tentative evidence against it. I’d love to hear what people think.

I’m not surprised – people often explicitly say things like ‘I know X, but I really can’t believe it yet’. This seems uninteresting from the perspective of epistemology. ‘Believe’ in common usage just doesn’t mean what it means in philosophy. Minds are big and complicated, and ‘believing’ is about what you sincerely endorse as the truth, not what seems likely given the information you have. Your ‘beliefs’ are probably related to your information, but also to your emotions and wishes and simplifying assumptions, among other things. ‘Knowing’, on the other hand, seems to be commonly understood as being about your information state. Though not always – for instance, ‘I should have known’ usually means ‘in my extreme uncertainty, I should have suspected enough to be wary’. At any rate, in common use knowing and believing are not directly related.

This is further evidence you should be wary of what people ‘believe’.