
When to explain

It is commonly claimed that humans’ explicit conscious faculties arose to explain ourselves and our intentions to others. Similarly, when people talk about designing robots that interact with people, they often mention the usefulness of designing such robots to be able to explain to you why they changed your investments or rearranged your kitchen.

Perhaps this is a generally useful principle for internally complex units dealing with each other: have some part that keeps an overview of what’s going on inside and can discuss it with others.

If so, the same seems like it should be true of companies. However my experience with companies is that they are often designed specifically to prevent you from being able to get any explanations out of them. Anyone who actually makes decisions regarding you seems to be guarded by layers of people who can’t be held accountable for anything. They can sweetly lament your frustrations, agree that the policies seem unreasonable, sincerely wish you a nice day, and most importantly, have nothing to do with the policies in question and so can’t be expected to justify them or change them based on any arguments or threats you might make.

I wondered why this strategy should be different for companies, and a friend pointed out that companies do often make an effort at higher-level explanations of what they are doing, though not necessarily accurate ones: vision statements, advertisements, etc. PR is often the metaphor used for how the conscious mind works, after all.

So it seems the company strategy is more complex: general explanations coupled with avoidance of being required to make more detailed ones of specific cases and policies. So, is this strategy generally useful? Is it how humans behave? Is it how successful robots will behave?*

Inspired by an interaction with ETS, evidenced lately by PNC and Verizon

*assuming there is more than one

One-on-one charity

People care less about large groups of people than individuals, per capita and often in total. People also care more when they are one of very few people who could act, not part of a large group. In many large scale problems, both of these effects combine. For instance climate change is being caused by a vast number of people and will affect a vast number of people. Many poor people could do with help from any of many rich people. Each rich person sees themselves as one of a huge number who could help that mass ‘the poor’.

One strategy a charity could use when both of these problems are present at once is to pair its potential donors and donees one-to-one. They could for instance promise the family of 109 Seventeenth St. that a particular destitute girl is their own personal poor person, and they will not be bothered again (by that organisation) about any other poor people, and that this person will not receive help from anyone else (via that organisation). This would remove both of the aforementioned problems.

If they did this, I think potential donors would feel more concerned about their poor person than they previously felt about the whole bunch of them. I also think they would feel emotionally blackmailed and angry. I expect the latter effects would dominate their reactions. If you agree with my expectations, an interesting question is why it would be considered unfriendly behaviour on the part of the charity. If you don’t, an interesting question is why charities don’t do something like this.

What ‘believing’ usually is

Experimental Philosophy discusses the following experiment. Participants were told a story of Tim, whose wife is cheating on him. He gets a lot of evidence of this, but tells himself it isn’t so.

Participants given this case were then randomly assigned to receive one of the two following questions:

  • Does Tim know that Diane is cheating on him?
  • Does Tim believe that Diane is cheating on him?

Amazingly enough, participants were substantially more inclined to say yes to the question about knowledge than to the question about belief.

This idea that knowledge absolutely requires belief is sometimes held up as one of the last bulwarks of the idea that concepts can be understood in terms of necessary conditions, but now we seem to be getting at least some tentative evidence against it. I’d love to hear what people think.

I’m not surprised – people often explicitly say things like ‘I know X, but I really can’t believe it yet’. This seems uninteresting from the perspective of epistemology. ‘Believe’ in common usage just doesn’t mean what it means in philosophy. Minds are big and complicated, and ‘believing’ is about what you sincerely endorse as the truth, not about what seems likely given the information you have. Your ‘beliefs’ are probably related to your information, but also to your emotions, wishes and simplifying assumptions, among other things. ‘Knowing’, on the other hand, seems to be commonly understood as being about your information state. Though not always – for instance ‘I should have known’ usually means ‘in my extreme uncertainty, I should have suspected enough to be wary’. At any rate, in common use knowing and believing are not directly related.

This is further evidence you should be wary of what people ‘believe’.

Cheap signaling

[Image: a box of chocolates, by J. Paxon Reyes via Flickr]

If all this stuff people do is for signaling, wouldn’t it be great if we could find ways of doing it more cheaply? At first glance, this sentiment seems a naive error; the whole point of paying a lot for a box of chocolates is to say you were willing to pay a lot. ‘Costly signaling’ is inherently costly.

But wait. In a signaling model, Type A people can be distinguished from Type B people because they do something that is too expensive for Type B people. One reason this action can be worthwhile for Type As and not for Type Bs is that type As have more to gain by it. A man who really loves his girlfriend cares more about showing her than a man who is less smitten does. A box of chocolates costs the same to both men, but hopefully only the first will buy it.

But there is another reason an action may be worthwhile for As and not for Bs: the cost is higher for type Bs. Relating some intimate gossip about a famous person is a good signal that you are in close with them because it is expensive for an ignorant person to fake, but very cheap for you to send.

Directly revealing your type can be thought of as an instance of this. Taking off your shirt to reveal your handsome muscles is extremely cheap if you have handsome muscles under your shirt and extremely expensive if you do not.

This kind of signaling can be very cheap. It only needs to be expensive for the kinds of people who don’t do it. And since they don’t do it, that cost is never actually paid. Whereas in the first kind of case I described (exemplified by chocolates), signaling must be relatively expensive: people of different types each have to pay more than the type below them cares enough to pay, i.e. more than the person below would gain by being mistaken for the type above.
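The contrast between the two mechanisms can be put as a toy model (the numbers below are hypothetical, not from any of the cases above): a signal separates the types when sending it is worthwhile for type A but not for type B.

```python
# Toy model (hypothetical numbers): a signal separates type A from
# type B when sending it is worthwhile for A but not for B.

def separates(benefit_a, benefit_b, cost_a, cost_b):
    """True if type A wants to send the signal and type B does not."""
    return benefit_a > cost_a and benefit_b < cost_b

# Mechanism 1 (chocolates): same cost for both, different benefits.
# The shared cost must exceed B's benefit, so A also pays a lot.
assert separates(benefit_a=100, benefit_b=40, cost_a=50, cost_b=50)

# Mechanism 2 (gossip, muscles): same benefit, different costs.
# A's cost can be tiny; only B's cost must be high, and B never
# actually pays it, because B doesn't send the signal.
assert separates(benefit_a=100, benefit_b=100, cost_a=1, cost_b=500)

# If the shared cost in mechanism 1 drops below B's benefit,
# B starts signaling too and the signal stops separating:
assert not separates(benefit_a=100, benefit_b=40, cost_a=30, cost_b=30)
```

The asymmetry is visible in who pays: in mechanism 1 the cost is realised by everyone who signals, while in mechanism 2 the high cost sits only on the type that stays silent.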

Cases of the second type, like gossip, are not always cheap. Sometimes it is cheaper for the type who sends the signal to send it, but they still have to pay quite a lot before they shake off the other type. If education is for signaling, it seems it is at least partly like this. University is much easier for smart, conscientious people, but if it were only a week long a lot of others would still put in the extra effort.

There can also be outside costs. For instance talking often works the second way. It is extremely cheap to honestly signal that you are an accountant by saying ‘I’m an accountant’, because the social repercussions of being found out to be lying are costly enough to put most people off lying about things where they would be discovered. While this is cheap both for the signalers and the non-signalers, setting up and maintaining the social surveillance that ensures a cost to liars may be expensive.

So if we wanted to waste less on signaling, one way to make signals cheaper would be to find actions with differences in costs to replace actions with differences in benefits. I’m not sure how to do that – just a thought.

How much do you really love the internet?

Would you give up the internet for a million dollars?

Many people say they would not. If you are one of them, and in a committed relationship, which of the following is true:

a) You would also not give up your partner for a million dollars

b) The internet is more valuable to you than your partner

The first one looks safer. But people change partners a lot, which suggests for many there is much less than a million dollars expected difference between one’s partner and the next best alternative, since the next best alternative frequently scales that gap and becomes the best. If every time a person changed partners the relative value of the new and old partners had changed by around two million dollars in the new partner’s favor, people should pretty soon stop expecting their current partner to be worth so much in the long run.
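The ‘around two million dollars’ figure follows from the refusal itself; a back-of-the-envelope version (using the post’s hypothetical $1 million):

```python
# Refusing $1M to keep a partner implies:
#   value(partner) - value(next_best_alternative) > $1M
refusal_premium = 1_000_000

# After a switch, the old partner *becomes* the next best alternative.
# An honest refusal of $1M then implies the new partner now leads the
# old by the same premium, so the relative values have swung by:
swing_per_switch = 2 * refusal_premium
print(swing_per_switch)  # 2000000
```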

It’s easy to offer the internet endless love while nobody ever offers you much reward for giving it up. Relationships are an interesting ‘sacred value’ to compare because we really are frequently in a position to give one up permanently for some other benefit.