
I am anti-awareness and you should be too

People seem to like raising awareness a lot; one might suspect too much, assuming the purpose is to efficiently solve whatever problem the awareness is being raised about. It’s hard to tell whether it is too much by working out the right amount and then checking whether that matches what people actually do. But a feasible heuristic approach is to consider factors that might bias people one way or the other, relative to what is optimal.

Christian Lander at Stuff White People Like suggests some reasons why raising awareness might be an inefficiently popular solution to other people’s problems:

This belief [that raising awareness will solve everything] allows them to feel that sweet self-satisfaction without actually having to solve anything or face any difficult challenges…

What makes this even more appealing for white people is that you can raise “awareness” through expensive dinners, parties, marathons, selling t-shirts, fashion shows, concerts, eating at restaurants and bracelets.  In other words, white people just have to keep doing stuff they like, EXCEPT now they can feel better about making a difference…

So to summarize – you get all the benefits of helping (self satisfaction, telling other people) but no need for difficult decisions or the ensuing criticism (how do you criticize awareness?)…

He seems to suspect that people are not trying to solve problems at all, but I shan’t argue about that here. At least some people think that they are trying to campaign effectively; this post is concerned with biases those people might face. Christian’s observations may or may not demonstrate a bias for these people. All things equal, it is better to solve problems in easy, fun, safe ways. However, if it is easier to overestimate the effectiveness of easy, fun, safe things, we probably raise awareness too much. I suspect this is true. I will add three more reasons to expect awareness to be over-raised.

First, people tend to identify with their moral concerns, much more than with their personal, practical concerns for instance. Those who think the environment is being removed too fast proudly call themselves environmentalists, while those who think the bushes on their property are withering too fast do not bother to advertise themselves with any particular term, even if they spend much more time trying to correct the problem. It’s not part of their identity.

People like others to know about their identities, and raising awareness is perfect for this. Continually working one’s concern about foreign forestry practices into conversation can be awkward, effortful and embarrassing. Raising awareness displays your identity even more prominently, while making this an unintended side effect of costly altruism for the cause rather than purposeful self-advertisement.

That raising awareness is driven in part by a desire to identify is evidenced by the fact that while ‘preaching to the converted’ is the epitome of verbal uselessness, it remains a favorite activity for those raising awareness, for instance at rallies, dinners and lectures. Wanting to raise awareness among people who are already well aware suggests that the information you hope to transmit is not about the worthiness of the cause. What else new could you be showing them? An obvious answer is that they learn who else is with the cause. That is some information about the worthiness of the cause, but it has other reasons for being presented. Robin Hanson has pointed out that breast cancer awareness campaign strategy relies on everyone already knowing not just about breast cancer but about the campaign itself. He similarly concluded that the aim is probably to show a political affiliation.


In many cases of identifying with a group to oppose some foe, it is useful for the group if you often declare your identity proudly and commit yourself to the group. If we are too keen to raise awareness about our identities, perhaps we are just used to those cases, and treat breast cancer like any other enemy who might be scared off by the assembly of a large and loyal army that doesn’t like it. I don’t know. But for whatever reason, I think our enthusiasm for increased awareness of everything is given a strong push by our enthusiasm for visibly identifying with moral causes.

Secondly, and relatedly, moral issues arouse a person’s drive to determine who is good and who is bad, and to blame the bad ones. This urge to judge and blame should, for instance, increase the salience of everyone around you eating meat if you are a vegetarian, at the expense of attention to any of the larger scale features of the world which contribute to how much meat people eat and how good or bad this is for animals. Rather than finding a particularly good way to solve the problem of too many animals suffering, you could easily be sidetracked by the fact that your friends are being evil. Raising awareness seems like a pretty good solution if the glaring problem is that everyone around you is committing horrible sins, perhaps inadvertently.

Lastly, raising awareness is specifically designed to be visible, so it is intrinsically especially likely to spread among creatures who copy one another. If I am concerned about climate change, the possible actions that come to mind will be those I have seen others do. I have seen in great detail how people march in the streets, hold stalls, wear stickers and tell their friends. I have little idea how people develop more efficient technologies, orchestrate less publicly visible political influence, or even change the insulation in their houses. This doesn’t necessarily mean that there is too much awareness raising; it is less effort to do things you already know how to do, so all things equal it is better to do them. However, too much awareness raising will happen if we fail to notice that which solutions we know about is selected largely for visibility rather than effectiveness, and to expend a bit more effort finding much more effective solutions accordingly.

So there are my reasons to expect too much awareness is raised: it’s easy and fun, it lets you advertise your identity, it’s the obvious thing to do when you are struck by the badness of those around you, and it is the obvious thing to do full stop. Are there any opposing reasons people would tend to be biased against raising awareness? If not, perhaps I should stop telling you about this problem and find a more effective way to lower awareness instead.

Estimation is the best we have

This argument seems common to many debates:

‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.

This could make sense if X wasn’t especially integral to the goal. For instance if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.

Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell whether you are increasing net utility, or by how much. The critic concludes that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly should not warrant abandoning the goal. One should always be better off putting whatever effort one is willing to contribute into the utilitarian accuracy it can buy, rather than throwing it away on a strategy that is more random with respect to the goal.

A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being ridiculous. It’s just not possible to know which actions will increase the value of the company exactly how much. Why don’t we try to make sure that all of our meetings end on time instead?’

In general, when optimizing X is somehow integral to the goal, the argument must fail. If the point is to make X as close to three as possible, for instance, then no matter how bad your best estimate of X under different conditions, you can’t do better by ignoring X altogether. If you have a non-estimating-X strategy which you anticipate will do better at getting a good value of X than your best estimate does, then you in fact believe yourself to have a better estimating-X strategy.
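To make this concrete, here is a minimal simulation sketch. The setup and all numbers are hypothetical illustrations, not from the post: the goal is to end up with X close to three, and even a very noisy estimate of X beats a strategy that ignores X entirely.

```python
import random

def simulate(noise_sd, n_actions=10, trials=10000):
    """Compare two strategies for getting X close to 3: picking the
    action whose noisy X-estimate is closest to 3, vs. ignoring X
    and picking an action at random."""
    est_err = 0.0
    ignore_err = 0.0
    for _ in range(trials):
        # Each available action leads to some true value of X.
        true_x = [random.uniform(0, 10) for _ in range(n_actions)]
        # We only observe noisy estimates of those values.
        estimates = [x + random.gauss(0, noise_sd) for x in true_x]
        # Strategy 1: use the estimates, however bad they are.
        pick = min(range(n_actions), key=lambda i: abs(estimates[i] - 3))
        est_err += abs(true_x[pick] - 3)
        # Strategy 2: ignore X altogether.
        ignore_err += abs(random.choice(true_x) - 3)
    return est_err / trials, ignore_err / trials

for sd in (0.5, 3.0, 10.0):
    e, i = simulate(sd)
    print(f"noise sd {sd:>4}: with estimates {e:.2f}, ignoring X {i:.2f}")
```

As the noise grows, the advantage of using the estimates shrinks toward zero, but with unbiased noise it does not reverse in expectation, which is the point of the argument.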

I have criticized this kind of argument before in the specific realm of valuing human life, but it seems to apply more widely. Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. This is arguably similar to lines like ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.

Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war, for instance, we should do something qualitative with scenarios and the like. This could be a defensible position: perhaps intuitions can better implicitly assess probabilities via some activity other than explicitly thinking about them. However, I have not heard this claim accompanied by any such motivating evidence. Also, if it were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources, rather than disregarding quantitative assessments altogether, as sketched below.
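One standard way to do such aggregation is to translate each source’s judgment into a probability and pool the results by a weighted average in log-odds space. A minimal sketch with hypothetical numbers follows; the function names, inputs and weights are my own illustration, not from the post.

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

def pool(probs, weights=None):
    """Pool several probability estimates by a weighted average of
    their log-odds, one common rule for combining forecasts."""
    if weights is None:
        weights = [1.0] * len(probs)
    avg = sum(w * logit(p) for w, p in zip(weights, probs)) / sum(weights)
    return inv_logit(avg)

# Hypothetical inputs: a qualitative scenario exercise read as ~2%,
# a quantitative model giving 0.5%, an expert survey giving 1%.
print(pool([0.02, 0.005, 0.01]))             # equal weights
print(pool([0.02, 0.005, 0.01], [1, 2, 1]))  # trust the model more
```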

Futarchy often prompts similar complaints: that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, some representation of what people want somehow has to get into whatever system of government is used, for the result not to be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly, in an unknown but messy fashion, while they do other things, is probably not more accurate than asking people what they want. It seems, however, that people think of the former as a successful way around the measurement problem, not as a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?

Know thyself vs. know one another

People often aspire to the ideal of honesty, implicitly including both honesty with themselves and honesty with others. Those who care about it a lot often aim to be as honest as they can bring themselves to be, across circumstances. If the aim is to get correct information to yourself and other people, however, I think this approach isn’t the greatest.

There is probably a trade-off between being honest with yourself and being honest with others, so trying hard to be honest with others comes at the cost of being honest with yourself, which in turn also prevents correct information getting to others.

Why would there be a trade-off? Imagine your friend said, ‘I promise that anything you tell me I will repeat to anyone who asks’. How honest would you be with that friend? If you say to yourself that you will report your thoughts to others, why wouldn’t the same effect apply?

Progress in forcing yourself to be honest with others must be something of an impediment to being honest with yourself. Being honest with yourself presumably also discourages being honest with others later, but that is less of a cost: if you are dishonest with yourself, you are presumably deceiving them about those topics either way.

For example, imagine you are wondering what you really think of your friend Errol’s art. If you are committed to truthfully admitting the answer, whatever it is, to Errol or your other friends, it will be pretty tempting to sincerely interpret whatever experience you are having as ‘liking Errol’s art’. This way both you and the others end up deceived. If you were instead committed to lying in such circumstances, you would at least be free to find out the truth yourself. This seems like the superior option for the truth-loving honesty enthusiast.

This argument relies on the assumptions that you can’t fully consciously control how deluded you are about the contents of your brain, and that the unconscious parts of your mind that control this respond to incentives. These things both seem true to me.

When is investment procrastination?

I suggested recently that the link between procrastination and perfectionism has to do with construal level theory:

When you picture getting started straight away the close temporal distance puts you in near mode, where you see all the detailed impediments to doing a perfect job. When you think of doing the task in the future some time, trade-offs and barriers vanish and the glorious final goal becomes more vivid. So it always seems like you will do a great job in the future, whereas right now progress is depressingly slow and complicated.

This set of thoughts reminds me of those generally present when I consider the likely outcomes of getting further qualifications vs. employment, and of giving my altruistically intended savings to the best cause I can find now vs. accruing interest and spending them later. In general the effect could apply to any question of how long to prepare for something before you go out and do it. Do procrastinators invest more?


Poverty does not respond to incentives

I wrote a post a while back saying that preventing ‘exploitative’ trade is equivalent to responding to an armed threat by eliminating the victim’s ‘not getting shot in the head’ option. Some people countered this argument by saying that it doesn’t account for how others respond: if poor people take the option of being ‘exploited’, they won’t be offered such good alternatives in future as they will if they hold out.

This seems unlikely, but it reminds me of a real difference between the two situations. If you forcibly prevent the person with the gun to their head from complying with the threat, the person holding the gun will generally want to withdraw the threat, as she now has nothing to gain and everything to lose. The world, on the other hand, will not relent from making people poor if you prevent poor people from responding to their poverty.

I wonder if the misintuition that the world will treat people better if they can’t give in to its ‘coercion’ comes from familiarity with how individual threateners behave in this situation. As a side note, this makes preventing ‘exploitative’ trade worse relative to preventing threatened parties from complying with threats.