How to motivate women to speak up

Cross-posted from Overcoming Bias. Comments there.

***

In mixed groups, women don’t talk as much as men. This is perhaps related to women being perceived as “bitches” if they do, i.e. pushy, domineering creatures whom one would best loathe and avoid. Lindy West at Jezebel comments:

…it just goes back to that hoary old double standard—when men speak up to be heard they are confident and assertive; when women do it we’re shrill and bitchy. It’s a cliche, but it’s true. And it leaves us in this chicken/egg situation—we have to somehow change our behavior (i.e. stop conceding and start talking) while simultaneously changing the perception of us (i.e. asserting that assertiveness does not equal bitchiness). But how do you assert that your assertiveness isn’t bitchiness to a culture that perceives assertiveness as bitchiness? And how do you start talking to change the perception of how you talk when that perception is actively keeping you from talking? Answer: UGH, I HAVE NO IDEA…

One problem with asserting that your assertiveness doesn’t indicate bitchiness is that it probably does. If all women know that assertiveness will be perceived as bitchiness, then those who are going to be perceived as bitches anyway (due to their actual bitchiness) and those who don’t mind being seen as bitches (and are therefore more likely to be bitches) will be the ones with the lowest costs to speaking up. So mostly the bitches speak, and the stereotype is self-fulfilling.

This model makes it clearer how to proceed. If you want to credibly communicate to the world that women who speak up are not bitches, first you need for the women who speak up to not be bitches. This can happen through any combination of bitches quietening down and non-bitches speaking up. Both are costly for the people involved, so they will need altruism or encouragement from the rest of the anti-stereotype conspiracy. Counterintuitively, not all women should be encouraged to speak more. The removal of such a stereotype should also be somewhat self-fulfilling – as it is reduced, the costs of speaking up decline, and non-bitchy women do it more often.

Interestingly and sadly, this is exactly opposite to the strategy that Lindy finds self-evident:

…But I guess I will start with this pledge I just made up: I, Lindy West, a shrill bitch, do hereby pledge to talk really really loud in meetings if I have something to say, even if dudes are talking louder and they don’t like me. I refuse to be a turtle—unless it is some really loud species of brave turtle with big ideas. I will not hold back just because I’m afraid of being called a loudmouth bitch (or a “trenchmouth loud ass,” which I was called the other day and as far as I can tell is some sort of pirate insult). Also, I will use the fuck out of the internet, because they can’t drown you out on the internet. The end. Amen or whatever.

Signaling bias in philosophical intuition

Cross-posted from Overcoming Bias. Comments there.

***

Intuitions are a major source of evidence in philosophy. Intuitions are also a significant source of evidence about the person having the intuitions. In most situations where onlookers are likely to read something into a person’s behavior, people adjust their behavior to look better. If philosophical intuitions are swayed in this way, this could be quite a source of bias.

One first step to judging whether signaling motives change intuitions is to determine whether people read personal characteristics into philosophical intuitions. It seems to me that they do, at least for many intuitions. If you claim to find libertarian arguments intuitive, I think people will expect you to have other libertarian personality traits, even if on consideration you aren’t a libertarian. If consciousness doesn’t seem intuitively mysterious to you, one can’t help wondering if you have a particularly unnoticeable internal life. If it seems intuitively correct to push the fat man in front of the train, you will seem like a cold, calculating sort of person. If it seems intuitively fine to kill children in societies with pro-children-killing norms, but you choose to condemn it for other reasons, you will have all kinds of problems maintaining relationships with people who learn this.

So I think people treat philosophical intuitions as evidence about personality traits. Is there evidence of people responding by changing their intuitions?

People are enthusiastic to show off their better-looking intuitions. They identify with some intuitions and take pleasure in holding them. For instance, in my philosophy of science class the other morning, a classmate proudly dismissed some point, declaring, ‘my intuitions are very rigorous’. If his intuitions are different from most, and average intuitions actually indicate truth, then his are especially likely to be inaccurate. Yet he seems particularly keen to talk about them, and chooses positions based much more strongly on them than on others’ intuitions.

I see similar urges in myself sometimes. For instance, consistent answers to the Allais paradox are usually so intuitive to me that I forget which way one is supposed to err. This seems good to me. So when folks seek to change normative rationality to fit their more popular intuitions, I’m quick to snort at such a project. Really, they and I have the same evidence from intuitions, assuming we believe one another’s introspective reports. My guess is that we don’t feel like coming to agreement because they want to cheer for something like ‘human reason is complex and nuanced and can’t be captured by simplistic axioms’ and I want to cheer for something like ‘maximize expected utility in the face of all temptations’ (I don’t mean to endorse such behavior). People identify with their intuitions, so it appears they want their intuitions to be seen and associated with their identity. It is rare to hear a person claim to have an intuition that they are embarrassed by.

So it seems to me that intuitions are seen as a source of evidence about people, and that people respond at least by making their better-looking intuitions more salient. Do they go further and change their stated intuitions? Introspection is an indistinct business. If there is room anywhere to unconsciously shade your beliefs one way or another, it’s in intuitions. So it’s hard to imagine there not being manipulation going on, unless you think people never change their beliefs in response to incentives other than accuracy.

Perhaps this isn’t so bad. If I say X seems intuitively correct, but only because I guess others will think seeing X as intuitively correct is morally right, then I am doing something like guessing what others find intuitively correct. Which might be a bit of a noisy way to read intuitions, but at least isn’t obviously biased. That is, if each person is biased in the direction of what others think, this shouldn’t obviously bias the consensus. But there is a difference between changing your answer toward what others would think is true, and changing your answer to what will cause others to think you are clever, impressive, virile, or moral. The latter will probably lead to bias.

I’ll elaborate on an example, for concreteness. People ask if it’s ok to push a fat man in front of a trolley to stop it from killing some others. What would you think of me if I said that it at least feels intuitively right to push the fat man? Probably you lower your estimation of my kindness a bit, and maybe suspect that I’m some kind of sociopath. So if I do feel that way, I’m less likely to tell you than if I feel the opposite way. So our reported intuitions on this case are presumably biased in the direction of not pushing the fat man. So what we should really do is likely further in the direction of pushing the fat man than we think.

Surplus splitting strategy

Cross-posted from Overcoming Bias. Comments there.

***

When negotiating over the price of a nice chair at a garage sale, it can be useful to demonstrate that there is only twenty dollars in your wallet. When determining whether your friend will make you a separate meal or you will eat something less preferable, it can be useful to have a long-term commitment to vegetarianism. In all sorts of situations where a valuable trade is to be made, but the distribution of the net benefits between the traders is yet to be determined, it can be good to have your hands tied.

If you can’t have your hands tied, the next best thing is to have a salient place to split the benefits. The garage sale owner did this when he put a price tag on the chair. If you want to pay something other than the price on the tag, you have to come up with some kind of reason, such as a credible commitment to not paying over $20. Many buyers will just pay the asking price.

This means manipulating salient ways to split benefits could be pretty profitable, so people should probably be doing it on purpose. I’m curious to know if and how they do.

Often the default is to keep the way the benefits naturally fall without money (or anything else ‘extra’) changing hands. For instance, suppose you come to lunch at my place and we both enjoy this to some extent. The default here is to keep the happiness we got from this, rather than, say, me paying you $10 on top.

So in such cases manipulating the division of benefits should mostly be done by steering toward more personally favorable variations on the basic plan. e.g. my suggesting you come to my place before you suggest that I come to yours. A straightforward way to get gains here is to just race to be the first to suggest a favorable option, but this is hard because it looks domineering to try to manipulate things in your favor in such a way. Unless you have some particular advantage at suggesting things fast and smoothly, such a race seems costly in expectation.

If in general trying to manipulate a group’s choice seems like a status-move or dominance-move, subtle ways to do this are valuable. Instead of a race to suggest options, you can have a prior race to make the options that you might want to suggest seem more suggestible. For instance if you’d prefer others come to your place than you go to others’ places, you can put a pool at your place, so suggestions to go to your place seem like altruism. If you know a lot of details about another person, you can use one of them to justify assuming that a particular outcome will be better for them. e.g. ‘We all know how much John likes steak, so we could hardly not go to Sozzy’s steak sauna!’. None of this works unless it’s ambiguous which way your own preferences go.

On the other hand if your preferences are very unambiguous, you can also do well. This is because others know your preferences without your having to execute a dominance move to inform them. If their preferences are less clear, it’s hard for them to compete with yours without contesting your status themselves. So arranging for others to know your preferences some other way could be strategic. e.g. If you and I are choosing which dessert to split, and it is common knowledge that I consider chocolate cake to be the high point of human experience, it is unlikely that we will get the carrot cake, even if you prefer it quite strongly.

So, strategy: if it’s clear that you have a pretty strong preference, make it quite obvious but not explicit. If you have a less clear preference, make it look like you have no preference, then position to get the thing you want based on apparently irrelevant considerations.

Even if the default is to transfer no cash, there can be a range of options that are clearly incrementally better for you and worse for me, with no salient division. e.g. If I invite you over for lunch, there are a range of foods I could offer you, some better for you, some cheaper for me. This seems quite similar to determining how much money to pay, given that someone will pay something.

In the lunch case I get to decide how good what I offer you is, and you have to take it or leave it. You can retaliate by thinking better or worse of me. You can’t very explicitly tell me how much you will think better or worse of me though, and you probably have little control over it. Your interpretation of my level of generosity toward you (and thus your feelings) and my expectations of your feelings are both heavily influenced by relevant social norms. So it’s not clear that either of us has much influence over which point is chosen. You could try to seem unforgiving or I could try to seem unusually ascetic, but these have many other effects, so are extreme ways to procure better lunching deals. I suspect this equilibrium is unusually hard to influence personally because there’s basically no explicit communication.

There are then cases where money or peanut butter sandwiches or something does change hands naturally, so ‘no transfer’ is not a natural option. Sometimes there is another default, such as the cost of procuring whatever is being traded. By default businesses put prices on items rather than consumers doing it, which appears to be an issue of convenience. If it’s clear how much surplus is being split, a natural way is to split it evenly. For instance, if you and I make $20 busking in the street, it would be strange for you to take more than $10, even if you are a better singer. This fairness norm is again hard to manipulate personally, except by making it more or less salient. But it’s a nice example of a large-scale human project to alter default surplus division.

When there are different norms among different groups, you can potentially reap more of the surplus by changing groups. e.g. if you are a poor woman, you might do better in circles where men are expected to pay for many things.

These are just a random bunch of considerations that spring to mind. Do you notice people trying to manipulate default surplus divisions? How?

On the goodness of Beeminder

Cross-posted from Overcoming Bias. Comments there.

***

Beeminder.com improves my life a lot. This is surprising: few things improve my life much, and when they do it’s usually because I’m imagining it. Or because they are things that everyone has known about for ages and I am slow on the uptake (e.g. not moving house three times a year, making a habit of eating breakfast, making habits at all). But Beeminder is new, and it definitely helps.

One measurable instrumental benefit of Beeminder is that I have exercised for half an hour to an hour per day on average since last October. Previously I exercised if I needed to get somewhere, or if the fact that exercise is good for people crossed my mind particularly forcibly, or if some even less common events occurred. So this is big. It seems to help a lot for other things too, such as working, but the evidence there is weaker since I used to work pretty often anyway. I’m sorry that I didn’t keep better track.

Unlike many other improvements to my life, I have some guesses about why this is so useful. But first let me tell you the basic concept of Beeminder.

Take a thing you can measure, such as how many pages you have written. Suppose you measure this every day, and enter the data as points in a graph. Suppose also that the graph contains a ‘road’ stretching up ahead of your data, to days that have not yet happened. Then you could play a game of keeping your new data points above the road. A single day below the road and you lose. It turns out this can be a pretty compelling game. This is basically Beeminder.
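
For concreteness, the game might be sketched in Python something like this. This is a minimal toy version under my own assumptions – a straight road with a constant slope, and invented function names – not Beeminder’s actual implementation (the real road’s steepness can change, as described below):

    from datetime import date

    def road_height(start_value, daily_rate, start_date, on_date):
        # Height of a simple linear road on a given day (assumes a constant slope).
        days_elapsed = (on_date - start_date).days
        return start_value + daily_rate * days_elapsed

    def still_winning(total_so_far, start_value, daily_rate, start_date, on_date):
        # The basic game: a single day below the road and you lose.
        return total_so_far >= road_height(start_value, daily_rate, start_date, on_date)

    # Example: pledge two pages of writing per day, starting from zero on June 1.
    start = date(2012, 6, 1)
    print(still_winning(9, 0, 2, start, date(2012, 6, 5)))  # 9 pages vs. an 8-page road: True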

There are more details. You can change the steepness of the road, but the change only takes effect a week in the future. So you can fine-tune the challengingness of a goal, but can’t change it out of laziness unless you are particularly forward-thinking about your laziness (in which case you probably won’t sign up for this).

There is a lot of leeway in what indicators you measure, and some I tried didn’t help much. The main things I measure lately are:

  • number of 20-minute blocks of time spent working. They have to be continuous, though a tiny bit of interruption is allowed if someone else causes it
  • time spent exercising, weighted by the type of exercise, e.g. running = 2x dancing = 2x walking
  • points accrued for doing tasks on my to-do list. When I think of anything I want to do I put it on the list, whether it’s watching a certain movie or figuring out how to make the to-do list system better. Some things stay there permanently, e.g. laundry. I assign each task a number of points, which goes up every Sunday if it’s still on the list. I have to get 15 points per day or I lose (see the sketch after this list)
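
For concreteness, the points scheme might look something like this in Python. It is a toy sketch of my own rules, not anything Beeminder provides, and the escalation amount is an assumption for illustration:

    TARGET_POINTS_PER_DAY = 15
    WEEKLY_ESCALATION = 1  # assumed increment; the text above only says points go up each Sunday

    tasks = {'laundry': 2, 'watch that movie': 3, 'improve the to-do system': 5}

    def sunday_update(tasks):
        # Every Sunday, anything still on the list becomes worth more points.
        return {name: points + WEEKLY_ESCALATION for name, points in tasks.items()}

    def day_is_won(points_earned_today):
        # The rule: accrue 15 points in a day or lose.
        return points_earned_today >= TARGET_POINTS_PER_DAY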

At first glance, it looks like Beeminder is basically a commitment contract: that it gets its force from promising to take your money if you lose. In my experience this seems very minor. I often forget how much money is riding on goals, and seem to keep the ones with no money on about as well as the others. So at least for me the threat of losing money isn’t what’s going on.

What is going on? I think Beeminder – especially the way I use it – actually does a nice job of combining a bunch of good principles of motivation. Here are some I hypothesize:

Concrete steps

In order to use Beeminder for a goal, you need to be clear on how you will quantify progress toward it. This means being explicit about the parts it is made of. You can’t just intend to read more, you have to intend to read one philosophy paper every day. You can’t just intend to do your taxes, you have to intend to finish one of five forms every week. You can’t just intend to ponder whether you’re doing the right thing with your life, you have to intend to spend twenty minutes per week thinking up alternatives. Making a goal concrete enough to quantify destroys ugh fields and makes it easier to start. ‘What gets measured gets done’ – just making a concrete metric salient makes it easier to work toward than a similar vague goal.

Small steps

To Beemind a goal, you need to divide it into many small parts, so you can track progress. ‘Finish making my presentation’ might be explicit enough to measure, but the measure will be zero for a long time, then one. Breaking goals up into small steps has nice side effects. It removes ugh fields, induces near mode, and makes success likely at any particular step. In Luke Muehlhauser’s terminology, it increases ‘expectancy’ and allows ‘success spirals’*. Trading long-term goals for short-term ones also avoids the kind of delay that might make it easy to succumb to procrastination.

Don’t break the chain 

Otherwise known as the Seinfeld hack. This might be the main thing that motivates me to keep my Beeminder goals, in the place of the money. Imagine you are skipping rope. You have made it to 70 skips. It was kind of hard, but you’re not so exhausted that you have to stop. You probably feel more compelled to keep going and make it to 80 than you did when you started. In general, once you have successfully done something a string of times, doing it again seems more desirable. Perhaps this is particular to OCD kinds of people, but a Google search suggests many find it useful.

Beeminder is a nicely flexible implementation of this, because the chain is a bit removed from what you are doing. You only have to maintain an average, so you can work extra one day to slack off the next. This doesn’t seem to undermine the motivational effect.

Hard lines in middle grounds

Firm commitments are naturally made to extremes. This is partly due to principled moral stances, which tend to be both extreme and firm. But that’s not all that’s going on. It’s hard to manage a principle of eating 40% less meat. If people want to eat less meat, they either eat none at all, or however much they feel like, pushed down in a vague fashion with some bad feelings. The middle of the meat-eating spectrum is too slippery for a hard line – it’s hard to tell how much you eat and annoying to track it. ‘None’ is salient and verifiable. In other realms intermediate lines are required: your diet can’t cut eating to zero. So often diets are more vague, which makes them harder to keep.

Similarly, it’s easy to commit to doing something every day, or every Sunday, or every month. It’s harder to commit to do a thing 2.7 times per week on average, because it’s awkward to track or remember this ‘habit’.

Compromise positions are often more desirable than extremes, and desired frequencies are unlikely to match memorable periods. So it’s a pity that vague commitments are harder to keep than firm ones. Often people don’t make commitments at all, because the readily available firm ones are too extreme. This is a big loss.

Beeminder helps with making firm commitments to intermediate positions. Since you only ever need to notice if the slope of your data isn’t steep enough, any rate is as easy to use as a goal. You can commit to eating 40% less meat, you just have to estimate once what 40% is, then record any meat you eat. I’ve used Beeminder to journal on average five nights per week. This is better than every night or no night, but would otherwise be annoying to track.
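
In the road sketch from earlier, an intermediate commitment is just a fractional slope. For instance, reusing the hypothetical still_winning and start from above:

    # Journaling five nights per week is a road with slope 5/7 entries per day.
    journal_rate = 5 / 7
    # After 14 days the road asks for 10 entries; 11 keeps me above it.
    print(still_winning(11, 0, journal_rate, start, date(2012, 6, 15)))  # True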

A small threat to overcome tiny temptations

While working, there are various moments when it would be easier to stop than to continue, particularly if you mostly feel the costs and benefits available in the next second or so, and if you assume that you could start again shortly. It is in these moments that I tend to stop and get a drink, or look out of the window, or open my browser or whatnot.

Counting short blocks of continuous time working pretty much solves this problem for me. The rule is that if you stop at all, the whole block doesn’t count. So at any given moment there might be a tiny short-term benefit to stopping for a second, but there is a huge cost to it. In my case this seems to remove stopping as an option, in the same way that a hundred-dollar price on a menu item removes it as an option without apparent expense of willpower.

I originally thought it would be good to measure the amount of work I got done, rather than time spent doing it. This is because I want to get work done, not waste time on it. But given that I am working, I strongly prefer to do good work, fast. So there’s not much need for an added incentive there. I just need an incentive to begin, and one to not stop when a particular moment makes stopping look tasty. In Luke’s terminology, this kills impulsiveness.

Less stress

The long-term threat of failing to write an essay is converted into a short-term pleasure of winning each night at Beeminder. I’m not sure why this seems like a pleasure, rather than a threat of losing, but it does to me. Probably because losing at Beeminder isn’t that unpleasant or shameful. And how could getting points or climbing a scale not seem like winning? (This is about value in Luke’s terms).

More accuracy

It’s harder to maintain the planning fallacy, overconfidence, or an expectation of perfection in the future, in light of detailed quantitative data and a definite trend line.

Just the difference between ‘I should do that’ and ‘I should do that, so how much time will it take?… About two hours, so I guess it should get 20 points… that probably won’t be enough to compel me to do it soon, but that’s ok, it’s not urgent’ seems to change the mindset to one more sensitive to reality.

***

In sum, I think Beeminder partly works well because it causes you to think of your goals in small, concrete parts which can easily be achieved. It also makes achieving the parts more satisfying, and strings them into an addictive chain of just the right challengingness. Finally, it lends itself to experimentation with a wide range of measures of success, such as measuring time blocks or ‘points’, at arbitrary rates. The value from innovations there is probably substantial. So, averse as I am to giving lifestyle advice, if you’re curious about the psychology of motivation in humans, or if you want to improve your life a lot, you should probably take a look at Beeminder.

*You can also increase expectancy by measuring something like time rather than progress.

Grace-Hanson Podcast

Cross-posted from Overcoming Bias. Comments there.

***

Robin and I have a new podcast on the subject of play (mp3, wav, m4a). Older ones are here.

Don’t be thrown by a bit of silence at the start of the m4a one. We also don’t have the time right now to figure out how to put it in better formats. Sorry about that. If anyone else does, and posts such files, I’ll link to them.