Reputation bets

People don’t often put their money where their mouth is, but they do put their reputation where their mouth is all the time. If I say ‘The Strategy of Conflict is pretty good’ I am betting some reputation on you liking it if you look at it. If you do like it, you will think better of me, and if you don’t, you will think worse. Even if I just say ‘it’s raining’, I’m staking my reputation on this. If it isn’t raining, you will think there is something wrong with me. If it is raining, you will decrease your estimate of how many things are wrong with me the teensiest bit.

If we have reputation bets all the time, why would it be so great to have more money bets? 

Because reputation bets are on a limited class of propositions. They are all of the form ‘doing X will make me look good’. This is pretty close to betting that an observer will endorse X. Such bets are most useful for statements that are naturally about what the observer will endorse. For instance (a) ‘you would enjoy this blog’ is pretty close to (b) ‘you will endorse the claim that you would enjoy this blog’. It isn’t quite the same – for instance, if the listener refuses to look at the blog, but judges by its title that it is a silly blog, then (a) might be true while (b) is false. But still, if I want to bet on (a), betting on (b) is a decent proxy.

Reputation bets are also fairly useful for statements where the observer will mostly endorse true statements, such as ‘there is ice cream in the freezer’. Reputation bets are much less useful (for judging truth) where the observer is as likely to be biased and ignorant as the person making the statement. For instance, ‘removing height restrictions on buildings would increase average quality of life in our city’. People still do make reputation bets in these cases, but they are betting on their judgment of the other person’s views.

If the set of things where people mostly endorse true answers is roughly the set where it is pretty clear what the true answer is, then reputation bets do not buy much in the quest for truth. This seems not quite right though. One thing reputation bets do buy is prompting investment in finding out an answer that is somewhat expensive to check, but valuable to know if it comes out a certain way. For instance, if it looks like all the restaurants are closed today so you want to turn around and go home, and I say ‘no, I promise the sushi place will be open’, then I am placing a reputation bet. It wouldn’t have been worth checking before, but my betting increases your credence that it is open, making it worth checking, which in turn provides the incentive for me to bet correctly.

Another place reputation bets are helpful is if a thing will be discovered clearly in the relatively near future, and it is useful to know beforehand. For instance, we can have a whole discussion of what we will do when we get back to my apartment that implies certain facts about my apartment. You can believe these ahead of time, and plan, because you will think worse of me if, when we get there, it turns out I made the whole thing up.

Good intuitions

Sometimes people have ‘good intuitions’. Which is to say something like, across a range of questions, they tend to be unusually correct for reasons that are hard to explain explicitly.

How do people come to have good intuitions? My first guess is that new intuitions are born from looking at the world, and naturally interpreting it using a bunch of existing intuitions. For instance, suppose I watch people talking for a while, and I have some intuitions about how humans behave, what they want, what their body language means, and how strategic people tend to be. Then I might come to have an intuition for how large a part status plays in human interactions, which I could then go on to use in other cases. If I had had different intuitions about those other things, or watched different people talking, I might have developed a different intuition about the relevance of status.

On this model, when a person has consistently unusually good intuitions, it could be that:

A) Their innate intuition forming machinery is good: perhaps they form hypotheses easily, or they avoid forming hypotheses too easily. Or they absorb others’ useful words into their intuitions easily.

B) They had a small number of particularly useful early intuitions that tend to produce good further intuitions in the presence of the outside world.

C) They have observed more or higher quality empirical data across the areas where they have superior intuitions.

D) They got lucky, and randomly happen to have a lot of good intuitions instead of bad intuitions.

Which of these plays the biggest part seems important for:

  • Judging intuitions in hard or unusual areas: If A), then good intuitions are fairly general. So good intuitions about math (testable) suggest good intuitions about how to avoid existential risk (harder to test). This is decreasingly the case as we move down the alphabet.
  • Spreading good intuitions: If B), then it might be possible to distill the small number of core intuitions a person with good intuitions has, and share them with other people.

I expect all of A-D play some part (and that I have forgotten more possibilities). But are some of them particularly common in people who have surprisingly good intuitions?

Ethicists should look less ethical

Are ethicists more ethical than other people? Philosopher Eric Schwitzgebel has investigated this at some length and basically says no. He suggests this is because ethicists think of themselves as employed to think about ethics, not to be personally more ethical, and they are just not any more ethically ambitious than other people.

To find out if ethicists are more ethical, he checked how well they behave according to commonly held moral views. How often they steal library books, how often they call their mothers, how often they eat meat, and so on.

This seems like a poor way to judge the ethicalness of ethicists. Unless I am mistaken, the whole point of doing ethics research is to change our understanding of ethics. Successful ethics research should then lead to aiming to be ‘less ethical’, all else equal, if the measure of ethicalness is agreement with pre-existing or commonsense ethical norms.

Similarly, if your navigator directs you along a different route to the one you would have guessed, this suggests your navigator might actually be adding value.

The more troubling claim is that ethicists are apparently about as ethical as other people, rather than less ethical. This is not all that damning, since popular ethics is mostly deontological, and you could rearrange a lot of human behavior without much affecting adherence to a few deontological constraints. For instance, you can change which charities you give to substantially without affecting whether you give to charity and whether you kill anyone directly. Also, presumably a given ethicist studies some narrow set of activities, and is unlikely to have made progress on calling her mother aberrantly or whatever you happen to ask her about.

Ethicists like Peter Singer do manage to have views that at least sound like an ethical step backwards to the average person. Which seems like a good sign about whether they might be getting anywhere with the research.

I actually doubt that ethicists are much more ethical than other people. I just object to concluding that they are not with experiments that wouldn’t tell you if they were.

Aisle seat theorizing

I recently went on some planes. Here is what I think about whenever I go on a plane.

A basic plane has N rows of M seats, divided by an aisle. For instance, maybe N=50 and M=6.

The standard routine by which people exit a plane looks like this. First, about two thirds of the people positioned next to the aisle stand up and prepare their items to leave. Then the first two or so walk off the plane (the people from seats x and w in the diagram). Then the two people in seats v and y get out, prepare their things, and walk off. Then u and z do that. Then the next couple of people in the original queue walk off, allowing the people in their row to climb out, get their stuff, and leave. And so on, for every row.

Let:
A be the time it takes to prepare one’s things to go
B be the time it takes to walk one seat forward in the plane (such that it takes the last person NB to walk from their seat to the front of the plane)

Then the time it takes for the usual procedure is:

time for one row to depart * number of rows + time for last person to walk off

= number of people in a row * time to take your stuff and step forward for the next person to get out * number of rows + number of rows * time to walk forward one row

= M(A+B) * N + BN

= AMN + BMN + BN
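
If you want to play with the numbers, here is the same formula as a quick Python sketch (the function name is just mine, for illustration):

def usual_time(A, B, M, N):
    # Each of the M people in a row takes A to gather their things plus B to
    # step clear, and there are N rows; then the last person walks N seats
    # to the front at B per seat.
    return M * (A + B) * N + B * N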

Here is an alternative method. Everyone in column Q (a whole column of seats, front to back) stands and collects their things. They all walk off the plane. Column R stands and repeats the process. And so on.

Here is how long this procedure would take:

time for one column to depart * number of columns

= (time for a person to get their stuff + time for each person to move one seat forward, allowing the person behind them to start walking + time for the last person to walk all the way off) * number of columns

= (A + B(2N-1)) * M

= AM + 2BMN - BM

(In both cases I assumed that each person can only start walking forward after the person in front of them has moved one seat forward. So the last person in line takes B(N-1) time to start moving, and then BN time to get out. This is probably not quite right, but near enough.)
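
The alternative method, as the same kind of sketch:

def alternative_time(A, B, M, N):
    # A column of N people gathers their things at once (A), then files out:
    # the person at the back waits B*(N-1) to start moving and takes B*N more
    # to reach the door, i.e. B*(2N-1) in total. Repeat for all M columns.
    return (A + B * (2 * N - 1)) * M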

It might not be intuitively obvious, but in general AMN + BMN + BN is much bigger than AM + 2BMN - BM, if we assume it takes substantially longer to collect your bags than it does to walk a couple of steps forward. In fact, it is (A-B)M(N-1) + BN bigger, if we assume that I can do algebra.
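
If you don’t trust my algebra either, here is a symbolic check (using sympy, purely for convenience) that the difference really does simplify to that:

from sympy import symbols, simplify

A, B, M, N = symbols('A B M N')
usual = A*M*N + B*M*N + B*N
alternative = A*M + 2*B*M*N - B*M
claimed_gap = (A - B)*M*(N - 1) + B*N
print(simplify(usual - alternative - claimed_gap))  # prints 0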

For example if there are fifty rows (N = 50) of 6 seats (M = 6), and gathering your stuff takes ten seconds, and walking forward one seat takes one second, we have:

usual method
= AMN + BMN + BN
= 10*6*50 + 1*6*50 + 1*50
= 3350 seconds

alternative method
= AM + 2BMN - BM
= 10*6 + 2*6*50*1 - 1*6
= 654 seconds
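
Or, doing the same sums in Python (with the walking-only time that comes up below thrown in for comparison):

A, B, M, N = 10, 1, 6, 50
print(A*M*N + B*M*N + B*N)   # 3350 seconds, usual method
print(A*M + 2*B*M*N - B*M)   # 654 seconds, alternative method
print(2*B*M*N)               # 600 seconds, roughly the walking-only bound discussed below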

Intuitively, if a whole column can get their stuff together at once, that is a lot faster than everyone standing waiting while one person at the front gets their stuff. It’s bad if A gets multiplied by MN.

Each method is bottlenecked by something happening MN times – as many times as there are seats on the plane. In one case, we are bottlenecked by each person taking their stuff down one at a time and then taking a step forward. In the other, it is just the time it takes to walk forward one seat.

You can’t hope to be faster than about 2BMN: the time it would take for every person to walk off the plane in single file if they wait for the person ahead of them to move one seat forward before they start walking. So my proposed method is not much worse than theoretically optimal.

You might notice that the time to completely empty the plane isn’t the same as the time lost, because how bad it is for the plane to not be emptied yet depends on how many people there are still in it. If people leave fairly evenly throughout the disembarking process, the utilitarian cost is roughly time*total people/2. Intuitively, each person is there for half the time the plane is disembarking. This means the total time is proportional to the value lost, so we can ignore this factor. People don’t quite get out evenly throughout the process in these two procedures, but near enough.
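
Under that rough ‘people leave evenly’ assumption, the person-seconds in the example work out to something like this:

A, B, M, N = 10, 1, 6, 50
passengers = M * N
print((A*M*N + B*M*N + B*N) * passengers / 2)   # 502500.0 person-seconds, usual method
print((A*M + 2*B*M*N - B*M) * passengers / 2)   # 98100.0 person-seconds, alternative method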

Probably the algebra in this post is wrong in places, but I think the gist is correct.

So, the alternative method seems superior to me. Why isn’t it used?

Anecdotal uncertainty of pain

My experience of pain seems to be somewhat different from that of other people. For instance, for much of the day I wrote this I thought I was probably in pain, though I was unsure exactly where, or how bad it was, or if it was really pain instead of something else. To be clear, it was still unpleasant and fairly distracting.

Sometimes I feel like everything is terrible for five minutes or so before figuring out that the problem is that I’m in physical pain. I even explicitly wonder whether the problem is pain, and decide probably not, before later realizing I was wrong. In such cases I infer that I was in pain all along because it feels more like a picture emerging after staring at a bunch of dots for long enough, rather than something in the world changing. Also, I don’t have any other good explanation for what was so bad earlier.

I point this out because I think the usual folk theory of pain says that pain is a kind of direct experience that you can’t really be confused about. If you don’t know if you are in pain, you aren’t. Pain is a conscious experience, so being in pain implies being aware that you are in pain. Also knowing what the pain is like. I think I kind of assumed something like this until I paid more attention to my own experiences, or until my own experiences became more incomprehensible on this model. I don’t have a well worked out alternative model (maybe others do), but I expect it should allow for the possibility of being consciously confused about basically everything.

I’m also curious about whether I am especially unusual in this regard, or just tend to hear from people who are surprised by this. Are you ever unsure whether you are in pain? Are you ever unsure about its characteristics?