Why we love unimportant things

Consider all the things humans have ever invented. On average, the ones that have been adopted by the most people should be the most useful ones. This seems to be roughly what has happened.

Now consider the ones we get really excited about, and identify with, and celebrate. These are the ones that are not widely adopted. Chairs have been adopted by everyone, because they are great. Nobody ever mentions this. You might think they are just taken for granted because they are old. But consider skis. Skis have been around forever. But they are more controversial than chairs: they have never caught on with some people. Now notice that people who do like skis actually rave about them, and think about them, and consider themselves skiing enthusiasts.

Here are some more unpopular but raved-about innovations: drying fruit in the sun, dancing, the iPhone, the gin and tonic, the internet, Christianity, watercolour painting, eating a larger meal at lunchtime than in the evening, sexual promiscuity, and tea.

Here are some popular but uncelebrated innovations: the escalator, the hat, the mobile phone (this was on the other list back when they were rare), the Phillips-head screwdriver, the computer, queues, TV, bread, and floors.

Here are the closest things I can think of to counterexamples: the internet (it really fits in the first category, but many people who love it must rarely have contact with those who don’t, and vice versa; then again, people who rave about it often mean to endorse quite extreme and unorthodox uses of it); anti-racism (virtually everyone seems to think they like it, but the ones who rave about it do at least seem to think that others do not); anything a person consciously wants at a given moment (e.g. they have been standing for ages and find a chair, or someone brings them a big cake), though they still don’t tend to speak up for such things in general or identify with them; and sex.

So it seems that we largely celebrate the things that are least important to our actual wellbeing. It even looks to me like the less consensus there is on the value of something, the more impassioned are its fans. At the extreme, when people make up their very own theory or cheesecake or whatever, they can often become quite obsessed.

I take all this as a sign that we basically celebrate stuff to draw attention to our identities, not because it’s important.

Hidden motives or innocent failure?

There are many ways in which what humans do differs from what they should do if they wanted to achieve the ends they claim to want to achieve. Some of these are obviously because people don’t really want what they say they want. Few people who claim human life is valuable beyond measure are unaware, for instance, that small amounts of money can save lives overseas.

On the other hand, many cases are obviously innocent failures of imagination or knowledge. The apparent progress humanity has made over recent millennia is not just a winding path through various signaling equilibria; we have actually thought of better stuff to do. The stone age didn’t end because making everything out of stone stopped being a credible sign of a hardworking personality.

In between there are many interesting puzzles where it isn’t clear whether hidden motives or innocent failure are to blame*. Many people strongly prefer innocent failure as a default, but in general if you can think of some improvement to the status quo, it should be pretty surprising if heaps of other people haven’t also thought of it. Even if your idea is ultimately bad, there should be some signs of people having looked into it if its deficiency isn’t obvious. Often it is clear that people have known of apparently good ideas for ages, with no sign of action. So I think there is quite a case for hidden motives explaining many of these puzzles.

Sometimes when I point out such instances, I say something like ‘ha, you aren’t trying to do what you claim – looks like you are secretly trying to do this other thing instead’. Sometimes I say something like ‘if you are trying to do X, maybe you should try doing it in this way that would achieve X, rather than that other way that doesn’t seem to work so well’.

I’d like to make clear that my choice of explicitly blaming hidden motives vs. suggesting alternatives as though innocent failure were the cause is not necessarily based on how likely these two explanations are. I think either presentation of such a puzzle should suggest both hypotheses to some extent. If I blame hidden motives and you feel you don’t have those hidden motives, you should question whether you are behaving efficiently. If I blame innocent failure, and you don’t feel compelled to fix the failure, you might question your motives.

I expect the truth is usually a confusing mixture of hidden motives and innocent failure. In many such intrapersonal conflicts, it seems at least clear which side outsiders should be on. For instance if two parts of a person’s mind are interested in helping other people and looking like a nice person respectively, then inasmuch as those goals diverge outsiders should side more with the part who wants to help others, because at least others get something out of that.

Outsiders are also often in a good position to do this, due to their controlling influence on the part who wants to look like a nice person. They are the people to whom you must look nice. This means they can often side with the more altruistic part (or, if there isn’t one, with their own interests) just by insisting on higher standards of credible altruistic behaviour before they will be impressed. This is one good reason for pointing out what people should do better if they really cared, even if it seems unlikely that they do. Even if not a single reader really cares, one can at least hope to give them a measure by which to be more judgemental of others’ hypocrisy.

-:-:-

*The other very plausible explanation for a discrepancy between what seems sensible and what people do is always that people are in fact behaving sensibly, and the perplexed observer is just missing something. While this is presumably common, I will ignore it here.

When to explain

It is commonly claimed that humans’ explicit conscious faculties arose for explaining themselves and their intentions to others. Similarly, when people talk about designing robots that interact with people, they often mention the usefulness of designing such robots to be able to explain to you why it is they changed your investments or rearranged your kitchen.

Perhaps this is a generally useful principle for internally complex units dealing with each other: have some part that keeps an overview of what’s going on inside and can discuss it with others.

If so, the same seems like it should be true of companies. However my experience with companies is that they are often designed specifically to prevent you from being able to get any explanations out of them. Anyone who actually makes decisions regarding you seems to be guarded by layers of people who can’t be held accountable for anything. They can sweetly lament your frustrations, agree that the policies seem unreasonable, sincerely wish you a nice day, and most importantly, have nothing to do with the policies in question and so can’t be expected to justify them or change them based on any arguments or threats you might make.

I wondered why this strategy should be different for companies, and a friend pointed out that companies do often make an effort at higher-level explanations of what they are doing, though not necessarily accurate ones: vision statements, advertisements, etc. PR is often the metaphor for how the conscious mind works, after all.

So it seems the company strategy is more complex: general explanations coupled with avoidance of being required to make more detailed ones of specific cases and policies. So, is this strategy generally useful? Is it how humans behave? Is it how successful robots will behave?*

Inspired by an interaction with ETS, evidenced lately by PNC and Verizon

*assuming there is more than one

What to not know

I just read ‘A counterexample to the contrastive account of knowledge’ by Jason Rourke, at the suggestion of John Danaher. I’ll paraphrase what he says before explaining why I’m not convinced. I don’t actually know much more about the topic, so maybe take my interpretation of a single paper with a grain of salt. Which is not to imply that I will tell you every time I don’t know much about a topic.

Traditionally ‘knowing’ has been thought of as a function of two things: the person who does the knowing, and the thing that they know. The ‘Contrastive Account of Knowledge’ (CAK) says that it’s really a function of three things – the knower, the thing they know, and the other possibilities that they have excluded.

For instance I know it is Monday if we take the other alternatives to be ‘that it is Tuesday and my computer is accurate on this subject’, etc. I have excluded all those possibilities just now by looking at my computer. However if alternatives such as that of it being Tuesday and my memory and computer saying it is Monday are under consideration, then I don’t know that it’s Monday. Whether I have the information to say P is true depends on what’s included in not-P.
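The three-place relation above can be made concrete with a toy sketch (the function and example strings are my own invention, purely illustrative, not anything from Rourke’s paper or the CAK literature): knowing P relative to a contrast set just means having excluded every alternative in that set.

```python
# Toy model of the Contrastive Account of Knowledge (CAK).
# All names and example strings here are hypothetical, for illustration only.

def knows_contrastively(excluded: set, contrast_set: set) -> bool:
    """Under CAK, a subject knows P *relative to a contrast set* iff
    their evidence has excluded every alternative in that set."""
    return contrast_set <= excluded  # subset check

# The 'Monday' example: looking at my computer excludes the easy
# alternative, but not the more sceptical one.
excluded_by_computer = {"it is Tuesday and my computer is accurate"}

easy_contrast = {"it is Tuesday and my computer is accurate"}
hard_contrast = {"it is Tuesday and my memory and computer both say Monday"}

print(knows_contrastively(excluded_by_computer, easy_contrast))  # True
print(knows_contrastively(excluded_by_computer, hard_contrast))  # False
```

The same proposition (‘it is Monday’) comes out known relative to one contrast set and unknown relative to the other, which is the whole point of making knowledge a three-place relation.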

So it seems to me CAK would be correct if there were no inherent set of alternatives to any given proposition, or if we often mean to claim that only some of these alternatives have been excluded when we say something is known. It would be wrong if knowing X didn’t rely on any consideration of the mutually exclusive alternatives, and unimportant if there were a single set of alternatives, determined by the proposition whose truth is known, which is always what people mean to consider.

Rourke seems to be arguing that CAK is not like what we usually mean by knowledge. He seems to be doing this by claiming that knowing things need not involve consideration of the alternatives. He gives this example:

The Claret Case. Imagine that Holmes and Watson are investigating a crime that occurred during a meeting attended by Lestrade, Hopkins, LeVillard, and no others. The question Who drank claret? is under discussion. Watson announces ‘‘Holmes knows that Lestrade drank claret.’’ Given the question under discussion and the facts described, the alternative propositions that partially constitute the knowledge relation are Hopkins drank claret and LeVillard drank claret.

He then argues basically that Holmes can know that Lestrade drank claret without knowing that Hopkins and LeVillard didn’t drink claret, since all their claret drinking was independent. He thinks this contradicts CAK because he claims, using CAK,

 The logical form of Watson’s announcement, then, is Holmes knows that Lestrade drank claret rather than Hopkins drank claret or LeVillard drank claret.

Whereas we want to say that Holmes does know Lestrade drank claret, if for instance he sees Lestrade drinking claret, and he need not necessarily know anything about what Hopkins and LeVillard were up to.

Which prompts the question of why Rourke thinks these other guys’ drinking constitutes the alternatives to Lestrade’s drinking in the knowledge relation. The obvious real alternative to exclude is that Lestrade didn’t drink.

Rourke gets to something like this as a counterargument, and argues against it. He says that if ‘who drank claret?’ is interpreted as ‘work out whether or not each person drank claret’ then it can be divided up in this way into ‘Lestrade drank claret’ vs. ‘Lestrade did not drink claret’ combined with ‘Hopkins drank claret’ vs ‘Hopkins did not drink claret’ etc. However if the question is meant as something like ‘who is a single person who drank claret?’, then ‘knowing’ the answer to this question doesn’t require excluding all the alternative answers to this question, some of which may be true.

As far as I can tell, this seems troublesome because he supposes that the alternatives to the purported knowledge must be the various other possible answers to the question, if what you supposedly know is ‘the answer to the question’. The alternative answers to such a question can only be positive reports of different people drinking, or that nobody drank. The question doesn’t ask for any mentions of who didn’t drink. So what can we contrast ‘Lestrade drank’ with, if not ‘Lestrade didn’t drink’?

But why suppose that the alternatives must be the other answers to the question? If ‘knowing who drank claret’ just means knowing that a certain answer to that question is true rather than false, there seems to be no problem. Perhaps, for instance, ‘I know who drank’ means that I know ‘Lestrade did’ is one answer to the question. This can happily be contrasted with ‘Lestrade did’ not being an answer. Why not suppose ‘I know who drank claret’ is shorthand for something like that?

It seems that, at least for any specific state of the world, it’s possible to think of knowing it in terms of excluding the alternatives. It also seems that answering more awkwardly worded questions such as the one above must still be based on knowledge about straightforward states of the world. So how could knowledge of, for instance, at least one person who drank not be understandable in terms of excluding alternatives?

One-on-one charity

People care less about large groups of people than individuals, per capita and often in total. People also care more when they are one of very few people who could act, not part of a large group. In many large scale problems, both of these effects combine. For instance climate change is being caused by a vast number of people and will affect a vast number of people. Many poor people could do with help from any of many rich people. Each rich person sees themselves as one of a huge number who could help that mass ‘the poor’.

One strategy a charity could use when both of these problems are present at once is to pair its potential donors and donees one-to-one. They could, for instance, promise the family at 109 Seventeenth St that a particular destitute girl is their own personal poor person, that they will not be bothered again (by that organisation) about any other poor people, and that this person will not receive help from anyone else (via that organisation). This would remove both of the aforementioned problems.
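As a minimal sketch, the pairing strategy is just a bijection between donors and donees (the function name and the donor/recipient labels are made up for illustration):

```python
# Sketch of the one-to-one pairing strategy described above.
# Donor and recipient names are hypothetical.

def pair_one_to_one(donors, recipients):
    """Assign each donor exactly one recipient, and vice versa, so
    neither side is ever contacted about anyone else's case."""
    if len(donors) != len(recipients):
        raise ValueError("one-to-one pairing needs equal numbers")
    return dict(zip(donors, recipients))

pairs = pair_one_to_one(
    ["family at 109 Seventeenth St", "family at 111 Seventeenth St"],
    ["recipient A", "recipient B"],
)
print(pairs["family at 109 Seventeenth St"])  # recipient A
```

The equal-length check matters: with unequal numbers, somebody ends up unmatched and the ‘your own personal poor person’ promise breaks down.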

If they did this, I think potential donors would feel more concerned about their poor person than they previously felt about the whole bunch of them. I also think they would feel emotionally blackmailed and angry. I expect the latter effects would dominate their reactions. If you agree with my expectations, an interesting question is why it would be considered unfriendly behaviour on the part of the charity. If you don’t, an interesting question is why charities don’t do something like this.