
Obvious points

As mentioned previously, pointing out obvious things seems embarrassing to me. However, it also often seems very valuable. That might seem obvious to you. Even so, this post will elaborate on this obvious point.

The set of things an intellectual would like to claim are obvious will tend to be much larger than the set that is reliably casually inferable by a random person with three minutes to devote to the issue. It is probably even much larger than the set of things reliably inferable by that intellectual earlier in their life. Many questions have obvious answers, while the questions themselves are not obvious. Many questions are obviously important once you notice them, but were not salient beforehand. Many points are obvious intellectually, yet not automatically integrated into one’s worldview and actions. And arguably, the more important and true and valuable a point, the more likely it is to look obvious once you know it.

I sometimes think of considerations that are so obvious to me now that I can barely articulate the converse, yet which it seems I must have been unaware of when younger. In general, if something is too incoherent to articulate, this seems like a strong mark against its appropriateness as a focus of discussion. Its falsity is probably obvious. So I’m not very inclined to write blog posts about such topics. Yet it would usually have been very valuable for my younger self to read such a post – I’d guess more than hundreds of times as valuable as it is costly for me to write such posts, a cost which is in turn much larger than the cost to more knowledgeable readers of seeing a discussion of something they already knew. And unless I am unusually dense (in which case my blogging strategy seems unimportant), others probably make similar errors to the ones I seem to have made. So it seems probably socially beneficial to write posts about points as obvious as those.

If writing obvious things is costly to the author, does it matter much that it is socially beneficial? It makes more difference than you might suppose: if the author endorses writing socially beneficial obvious things, then when others see the author writing obvious things, they should be less inclined to infer that the author thought the point was non-obvious (as long as endorsing this coincides at all with writing things that seem obvious, which appears plausible). On that note then, I just wanted to say how important I think writing obvious things is.

The landscape of altruistic interventions

Suppose you want to figure out what the best things to do are. One approach is to start by prioritizing high level causes: is it better broadly to work on developing world health, or on technological development? Then you can work your way downwards: is it better to work on treating infectious diseases or on preventative measures? Malaria or HIV? Direct bed-net distribution or political interventions? Which politician? Which tactic? Which day?

This should work well if the landscape of interventions is kind of smooth – if the best interventions are found with the pretty excellent interventions, which are in larger categories with the great interventions, etc. This approach might work well for finding a person who really likes hockey for instance. The extreme hockey lovers will be found with the fairly enthusiastic hockey lovers, who will probably ultimately be in countries of hockey lovers. It should not on the other hand work very well for finding the reddest objects in your house – the most red thing is not likely to be in the room which has the most overall red. Which of these is more similar to finding good altruistic interventions?

This method would work well for finding the reddest things in your house if the redness of things was influenced a lot by color of the lights, and you had very different colored lights throughout your house. Similarly, if most of the variation in value between different altruistic interventions comes from general characteristics of high level causes, we should expect this method to work better there. You might also expect it to work well if the important levels could be mixed and matched – if the best high level cause could be combined with the best generic method of pursuing a cause, and done with the best people. These things seem plausible to me in the case of altruistic interventions, but I’m not really sure. What do you think?

High level climate intervention considerations

I’ve lately helped Giving What We Can extend their charity evaluation to climate change mitigation charities. This is a less abridged draft of a more polished post up on their blog.

Suppose you wanted to prevent climate change. What methods would get you the most emissions reduction for your money?

GWWC research has recently tried to answer this question, with a preliminary investigation of a number of climate change mitigation charities. Another time, I’ll discuss our investigation and its results in more detail. This time I’m going to tell you about some of the high level arguments and considerations we encountered for focusing on some kinds of mitigation methods over others.

The binding budget consideration

The world’s nations have been trying to negotiate agreements limiting their future emissions in concert. The emissions targets chosen in such agreements are intended to sum to a level deemed ‘safe’. Suppose some day such agreements are achieved. It seems then that any emissions you have reduced in advance will just be extra that someone else will be allowed to emit after that agreement.

This argument implies that political strategies, particularly those directed at causing such an agreement to come about, are better than more direct means of reducing emissions.

This argument may sound plausible, but note that it relies on the following assumptions:

  1. the probability of such an agreement being formed is not substantially altered by prior emissions reductions

  2. the emissions targets set in such an agreement are not sensitive to the cost of achieving them

  3. such targets will be met, or we will fail to meet them by a similar margin regardless of how far we begin from them.

None of these is very plausible. Agreement seems more likely if it will be cheaper for the parties to uphold, or if it is more expensive to have no agreement. Both of these are altered by prior emissions reductions. There is no threshold of danger at which targets will automatically be set; more expensive targets are presumably less likely to be chosen. Two degrees is an especially likely target due to past discussions; however, as it becomes harder to meet, it becomes less likely to be retained as the goal. The further we begin from the targets we set, the less likely we are to attain them. Overall, it seems unclear whether reducing emissions by a tonne yourself will encourage more or less abatement through future large scale agreements. Either way, it is probably not a large effect. Consequently no adjustment is made for this consideration in our analysis.

Correcting feedback adjustments

Suppose you protect a hectare of rainforest from being felled. The people who would have bought the wood still want wood though, so the price of wood increases a little. This encourages others to fell their forests a little more, canceling some of your gains.

This is how prices work in general: when you buy something, the world makes a bit more of that thing, but not as much as you bought. If you buy a barrel of oil and bury it, you reduce the total oil to be burned, but by less than one barrel. Others respond to the higher price of oil after you buy some by drilling for more.

These considerations are real, and well known by economists. The big question is, how much do these feedbacks reduce the effect of your efforts?

This depends on what are known as the ‘price elasticity of supply’ and the ‘price elasticity of demand’. These measure how much more wood is harvested if the price of wood goes up by one percent, and how much more wood is wanted if the price goes down by one percent. Let’s call these ES and ED. If you ‘buy’ one unit of forest and keep it from being logged, the reduction in logged forest is ED/(ED + ES). Supply and demand elasticities are known for many items. If we can’t find these figures however, we may estimate ES and ED to be roughly equal, so estimate the real effect of reducing logging to be half of what it first seems.
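
For concreteness, here is a minimal sketch of that calculation; the elasticity values are illustrative assumptions, not estimates for any real market:

```python
# A minimal sketch of the leakage adjustment described above.
# es and ed are the price elasticities of supply and demand; the
# example values are assumptions, chosen only for illustration.
def net_reduction(units_protected, es, ed):
    """Net reduction in logging after the market responds to higher prices."""
    return units_protected * ed / (ed + es)

# With no better data, assume supply and demand are equally elastic:
print(net_reduction(1.0, es=1.0, ed=1.0))  # 0.5: half the naive effect

# If demand is much more elastic than supply, buyers give up wood
# rather than paying more, and most of the protection sticks:
print(net_reduction(1.0, es=0.5, ed=2.0))  # 0.8
```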

Many other kinds of correcting feedbacks work in a similar way. If you reduce carbon emissions by a tonne, everyone else will be a tiny bit less concerned about climate change in expectation, and make a tiny bit less effort to prevent it. If you put an extra tonne of carbon dioxide in the atmosphere, plants and the oceans will absorb carbon dioxide a tiny bit faster, so the total added to the atmosphere will be less than a tonne.

The selfish tech concern

New technologies could greatly aid climate change mitigation. Unlike many other approaches however, technological development is something private businesses have large economic incentives to pursue. This is often seen as reason to avoid paying for technological progress: if you didn’t donate, businesses would do it anyway. Plus they have probably already taken the good opportunities.

The truth appears to be quite the opposite. Suppose we break projects up into two categories: those that have attracted some private investment, and those that have not. A random project from the first category is actually likely to be better than a project from the second category.

Self-interested companies will invest in clean energy research until the costs exceed the private benefits (the gains that return to them, rather than to everyone else). This means that at the point they stop, the marginal costs and the marginal private gains are about equal. If you buy more at this point, to get public gains, on the margin this is close to free for you, because the private gains almost cancel the costs.

For a random project without private investment, you just know that the private gains are somewhere below the costs. Probably they are far below, so it is substantially more expensive. This could be made up for if it had larger public benefits, but there seems little reason to expect this. In particular, if private and public gains are correlated, you would not expect this. In general, funding extra work on self-interested projects will be more effective than funding projects that only altruists ever cared for.
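
Here is a toy numeric sketch of this comparison; all the numbers are my own illustrative assumptions, not figures from our analysis, and the ‘top-up’ logic assumes firms would contribute up to their private gain:

```python
# Toy comparison of funding a project at a firm's stopping point versus
# a project no firm was willing to fund. For each project: its cost,
# the private benefit to the firm, and the public benefit to everyone
# else. All numbers are hypothetical.
projects = [
    # (cost, private_gain, public_gain)
    (100.0, 99.0, 50.0),  # marginal project at a firm's stopping point
    (100.0, 20.0, 50.0),  # a project only altruists ever cared for
]

for cost, private, public in projects:
    # A donor only needs to cover the gap the firm won't pay;
    # the public gain is what the donation buys.
    top_up = cost - private
    print(f"top-up {top_up:.0f} buys public gain {public:.0f} "
          f"-> {public / top_up:.1f} units of public benefit per dollar")
# The project firms nearly funded gives 50x leverage; the purely
# altruistic one gives under 1x, despite identical public benefits.
```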

The worthless tonne concern

What if you can only reduce carbon emissions by a single puny tonne? Or if you have a project to reduce emissions, but it can’t get to the ‘heart of the problem’, merely make a small dent cheaply then run out of steam?

Many people feel that since climate change is a very big problem, contributing a small amount to its solution is not worth much, compared to completely solving a proportionally smaller problem, such as one person’s illness. If you contribute a tiny bit, other people may not contribute the rest of what is needed to solve the problem. Or China might increase its emissions so much as to dwarf reduction efforts in your country. A common feeling is that your efforts have then been wasted.

This would be true if the amount of carbon in the atmosphere didn’t make much difference except at a threshold. That is, if ‘solving climate change’ was worth a lot, while ‘almost solving climate change’ was worth little.

This is not the situation we are in. Firstly, as far as we know the costs from climate change don’t come at big thresholds like that – each extra bit of carbon dioxide in the atmosphere makes climate change a bit worse. ‘Safety’ targets such as two degrees do not signify steep changes in harm. They are lines chosen to represent costs ‘too large’ by some agreed standards, to focus mitigation efforts.

Secondly, even if there were steep thresholds, we don’t know where they are. This makes reducing emissions on the margin as good in expectation as if there weren’t thresholds, though more chancy. Often your effort will do nothing, while sometimes it does everything. This is similar to running for a bus which leaves at an unknown time – at many times your running won’t help, but sometimes it will make all the difference. Overall, if you run a bit more you’re a bit more likely to catch the bus.
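
A small worked model may make this clearer. Suppose, purely for illustration, that damages jump by some amount H once cumulative emissions pass an unknown threshold, equally likely to lie anywhere in a range of M tonnes:

```python
# A minimal sketch of the 'unknown threshold' point above. Model
# (my own illustrative assumptions): damages jump by H once cumulative
# emissions pass an unknown threshold T, uniform on [0, M] tonnes.
H = 1000.0  # harm if the threshold is crossed (hypothetical units)
M = 100.0   # range of possible threshold locations, in tonnes

def expected_harm(emissions):
    """Expected harm: H times the chance the threshold is already passed."""
    return H * min(emissions, M) / M

# Expected benefit of abating one tonne, from two starting points:
for e in (40.0, 80.0):
    benefit = expected_harm(e) - expected_harm(e - 1.0)
    print(f"starting at {e} tonnes, one tonne of abatement is worth {benefit}")
# Both print 10.0 (= H/M): under uncertainty about the threshold, a
# marginal tonne is worth the same in expectation wherever you start,
# just as if harm rose smoothly, though the realized outcome is
# all-or-nothing.
```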

So a tonne of reduced emissions is worth about as much whether it is the only tonne you contribute, or one of millions.

Hidden help complications

Suppose a charity tries to shut down coal plants, and coal plants are indeed shut down. This is not strong evidence that the charity has achieved anything. Other charities may also have been trying to shut down coal plants, and coal plants close for many reasons. On the other hand, the charity may have made many other power plants more likely to close, which you don’t see because they in fact stayed open. How can you say how much good this charity has done?

There is not a simple answer. You will want to find a way to estimate what would have happened otherwise. You will need to decide whether to credit a charity with the difference in probability of outcomes they seem to have caused, or with what actually happened. The former avoids extra randomness and better counts the effort that you want, while the latter is much easier to measure, and harder to manipulate. Another question is whether to credit charities with the marginal or average value of contributing to a project alongside other charities, or something else. For instance, if the first charity working on something makes a large difference, but each added charity helps less, do you divide the gains between them, credit each with almost nothing, or credit each successive one with less?
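
A toy example may make the contrast vivid. Suppose, hypothetically, that identical charities successively join a project with diminishing returns:

```python
import math

# A toy sketch of the credit-assignment question, with an assumed
# (hypothetical) benefit curve: total benefit from n identical
# charities working on the same project, with diminishing returns.
def total_benefit(n):
    return 100 * math.log(1 + n)

n = 4
avg = total_benefit(n) / n                           # divide the gains evenly
marginal = total_benefit(n) - total_benefit(n - 1)   # value of the last entrant
successive = [total_benefit(i) - total_benefit(i - 1) for i in range(1, n + 1)]

print(f"average credit:  {avg:.1f}")
print(f"marginal credit: {marginal:.1f}")
print("successive credits:", ", ".join(f"{c:.1f}" for c in successive))
# The three conventions give quite different answers for the same facts:
# roughly 40 each, about 22 for everyone, or 69 down to 22 by order of entry.
```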

The unruly future consideration

Suppose you reduce emissions by stopping some forest from being logged. Even if you do a good job of this, it might be hard to protect it from being logged in fifty years. You have bought the people in the future the option of continuing to lock up the carbon, but circumstances and economic incentives will be different, and it’s not clear whether they will take it. If the forest is logged in fifty years, you will have basically delayed some climate change for fifty years, ignoring complications such as short term emissions exacerbating feedbacks which produce further emissions.

Thus protecting the forest reduces most of the harm it appears to in the short term, but an increasingly small fraction of harms moving into the future, as the cumulative probability that it will be logged rises. How much this is worth overall depends on where the harms are concentrated. The increasing costs of the climate moving further from what we are used to suggest that costs will be concentrated in the further future. But wealth, technological progress and adaptation push hard in the other direction. Also, people are more likely to continue your mitigation in cases where climate change turns out to be worse in the future. I am not sure of the overall effect. This consideration could erode a large fraction of the value of a mitigation project.
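
As a rough sketch of this erosion, under assumed numbers: suppose the averted climate harms arrive over five decades, and the forest survives to each decade with some probability.

```python
# A rough sketch (assumed numbers throughout) of how much of a forest
# project's value survives if protection may lapse later.
def value_retained(harm_schedule, survival):
    """Fraction of the naive benefit that is actually realized.

    harm_schedule[t]: share of the averted climate harm falling in decade t
    survival[t]: probability the forest is still standing in decade t
    """
    realized = sum(h * s for h, s in zip(harm_schedule, survival))
    return realized / sum(harm_schedule)

# Cumulative logging risk rising over five decades:
survival = [0.95, 0.85, 0.7, 0.55, 0.4]

print(value_retained([1, 1, 1, 1, 1], survival))  # harms spread evenly: ~0.69
print(value_retained([0, 0, 1, 2, 4], survival))  # harms concentrated late: ~0.49
```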

***

These have been some of the issues considered in our quest to find the best organizations for turning dollars into reduced greenhouse emissions. If our analyses of them are adequate, next time we will bring you the finest climate change charities a brief investigation can find.


An illicit theory of costly signaling

I’m sympathetic to the view that many human behaviors are for signaling. However so far it doesn’t seem like a very tight theory. We have a motley pile of actions labeled as ‘maybe signaling’, connected to a diverse range of characteristics one might want to signal. We have a story for why each would make sense, and also why lots of behaviors that don’t exist would make sense. However I don’t know why we would use the signals we do in particular, or why we would particularly signal the characteristics that we do. When I predict whether a middle class Tahitian man would want to appear to his work colleagues as if he was widely traveled, and whether he would do this by showing them photographs, my answers are entirely based on my intuitive picture of humans and holidays and so on; I don’t see how to derive them from my theory of signaling. Here are two more niggling puzzles:

Why would we use message-specific costly signals for some messages, when we use explicit language + social retribution for so many others?

Much of the time when you speak to others, your values diverge from theirs at least a little. Often they would further their own interests best by deceiving you, ignoring social costs and conscience. But even in situations where the risks from their dishonesty are large, your usual mode of communication is probably spoken or written language.

This is still a kind of costly signaling, as long as the person faces the right threats of social retribution. Which they usually do, I think. If a person says to you that they have a swimming pool, or that they write for the Economist, or that your boyfriend said you should give his car keys to them, you will usually trust them. You are usually safe trusting such claims, because if someone made them dishonestly they could expect to be found out with some probability, and punished. In cases where this isn’t so – for instance if it is a stranger trying to borrow your boyfriend’s car – you will be much less trusting accordingly.

This mode of costly signaling seems very flexible – spoken language can represent any message you might want to send, and the same machinery of social sanctions can be used to guard many messages at once. And we do use this for a lot of our communication. So why do we use different one-off codes for some small class of messages? What sets that class apart?

The main obvious limitation of language + social sanctions is that it requires a threat of social retribution large enough to discourage lying. This might be hard to arrange if, for instance, there are very large gains from lying, if lies are hard to detect, or if the person who might lie doesn’t rely on good relationships with the people who might be offended by the lies. So maybe we use message-specific costly signals in those cases?

In many of those cases we do use a kind of costly signal, yet a different variant again from the kind hypothesized to covertly pervade human interactions. This type of signal is the explicit credential. When a taxi-driver-to-be takes a driving test or has a background check, then displays his qualifications, this is a signaling display. Acquiring these documents is much cheaper for a person who can drive and has a clean background, and you (or the taxi company) know this and treat him differently when he makes these signals. I say this seems different from the social signaling we usually think of because it is explicitly intended as a signal, and everyone readily accepts that that is the goal, and is fine with it. Which almost brings me to the next puzzle. In conclusion, it’s not clear whether the signaling that we usually think of as such mostly occurs in situations where language and social sanctions are hard to use, but it is at least not the only thing used in such cases.

Why is signaling seen as bad? Why don’t we know about our own signaling?

It is often taken as given that signaling is bad. If a person comes to believe that a behavior they once partook in is for signaling, it is not unusual for them to give it up on those grounds alone, without even noticing the step of inference required between ‘is for signaling’ and ‘is bad’. A signaling theory is apparently a cynical theory.

This seems odd, as badness is not implied at all by the theoretical costly signaling model. There, signaling can be bad or good socially, depending on the costs of carrying it out. There are gains from assorting people well – it is better if the good people do the important jobs for instance – but no guarantee that the costs of the fight won’t overwhelm the gains.

Another related oddity is that people are supposed to be mostly unaware that they are signaling. Nobody bats an eyelid when a person claims to realize that they were doing a thing for signaling in the past. Talk of signaling is full of ‘Maybe I’m just doing this for signaling, but …’. Yet in the naive model of human psychology, it is at least a bit odd to be unaware of your motives in taking an action until months later. It’s true that people quite often don’t appear to have a good grasp of their own drives, yet in signaling this seems to be the normal expectation. And again, the theoretical model of costly signaling says nothing of this. It’s not obvious why you should expect this at all, given that model.

Another reason this seems strange is that we do have a lot of other explicit forms of signaling that we are aware of and ok with, as mentioned above (qualifying tests, ID cards, licenses). It is not that we have a problem with spending effort on almost-zero-sum games, or paying costs to look good.

An explanation

I’d like to suggest an explanation: costly signaling (of the message-specific unconscious variety) is largely used to communicate illicit messages. For instance, many messages about one’s own wealth, accomplishments, status, or sexual situation, and other messages about social maneuvering and judgement, seem to be illicit. Such things are also common targets of signaling theories, though my reasons for suggesting this explanation are mostly theoretical.

Illicit messages can’t be honestly transmitted using language and social norms, for a few reasons. Illicit things often shouldn’t be said explicitly, for plausible deniability, to avoid common knowledge, etc. This means you generally can’t use language to communicate illicit things, because language is explicit. This is one reason language + social retribution doesn’t work well for illicit messages. But also if you successfully have plausible deniability or prevent the message spreading far, both of these make social retribution hard to arrange. So implicit messages are quite hard to make honest through language + social retribution. Or through explicit verification for that matter, which is similar. Yet if such messages are to be listened to at all, they need some other guarantee, which other kinds of non-explicit costly signaling can provide. So this would explain the first puzzle.

If we had a set of signals just for illicit messages, it would be very silly to claim that we were aware of sending such things, and perhaps upsetting to believe that we were and to lie about it. So for the usual reasons that people are thought to be unaware of their less desirable tendencies, it wouldn’t be surprising if people were unaware of the signals they were sending. And if such signals were largely used for illicit messages, it would be unsurprising if we universally thought of signaling as an illicit activity. So this would explain the second puzzle.

An unusual counterargument

Oftentimes, the correct response to an argument is ‘your argument appears after cursory investigation to make sense, however the fact that many smart people have never mentioned this to me suggests that there are good counterarguments, so I remain unconvinced’.

I basically never hear this response, which suggests that there are good counterarguments. Or alternatively, that it is unappealing to respond accurately in such cases. The latter seems very plausible, because responding this way amounts to admitting that one cannot assess any argument at the drop of a hat.

If so, what do people actually say instead? My guess is the first argument they can think of that points in the direction that seems right. This seems unfortunate, as the ensuing discussion of a counterargument that nobody believes can’t possibly resolve the debate, nor be of much interest to anyone.