
Start taking drugs randomly

I wanted to try a drug. I also wanted to be able to distinguish the direct effects of the drug from the placebo effects of the drug. Most importantly, I wanted to isolate placebo side effects. It seemed there was a chance of real side effects, and that meant that it was basically guaranteed that I would imagine those side effects. However, if I could tell that I was imagining them, I figured I might stop doing it.

The usual way to distinguish placebo effects from real effects is to take a bunch of people and give half of them the real drug and half the placebo drug. Then if five percent of both groups get a headache, you know that the headaches are entirely placebo.

Of course, that had already been worked out for this drug. It probably causes a few real side effects, and a few placebo ones. But what I wanted to know was whether the side effects were real in my own particular case.

If you want to figure this out, a natural idea is to get someone else to randomly give you the drug or a placebo on different days (or in different few-day bouts, if the drug takes time to act). After enough rounds, you can tell whether the drug periods are better or worse.

This is a hard enough experiment to run on yourself if it takes a day for the drug to work. But what if the drug might take a month to work or to have side effects? Someone could randomize whether you get the drug or the placebo for whole several-month-long stints, but at that point it will take you a year to get a few data points, and each one will be as different from the others as several-month-long periods are wont to be, aside from any effect of the drug. And you will have missed out on taking the drug for half a year.

Here is a different way to do this: randomize the start date. I asked a friend to pick a random day within the next few months, and then give me placebo pills until that day, and real pills from then on. He did this by putting all the pills inside big red capsules so that I couldn’t tell them apart, and then putting them in a pill advent calendar, changing over on the right day. He wrote down the date of the switch. Then I took the pills, kept records of the thing I was trying to fix, and watched out for anything that seemed like the feared side effects.
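The friend’s side of this is simple enough to sketch in a few lines of code. This is just an illustrative toy, not anything that was actually written down at the time; the function name and window parameters are made up:

```python
import random

def make_pill_schedule(n_days, window_start, window_end, rng=None):
    """Pick a random switch day within [window_start, window_end] and
    return it along with the pill schedule: placebo capsules before the
    switch day, real ones from then on. Only the friend knows switch_day;
    the subject just sees identical big red capsules."""
    rng = rng or random.Random()
    switch_day = rng.randint(window_start, window_end)
    schedule = ["placebo" if day < switch_day else "drug"
                for day in range(n_days)]
    return switch_day, schedule

# Example: four months of pills, with the switch landing somewhere
# in the first three months.
switch_day, schedule = make_pill_schedule(120, 0, 89, random.Random(0))
```

The friend writes down `switch_day` and says nothing; the subject only ever sees `schedule`.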

This way, I was basically getting a bunch of somewhat different one-month periods in a row, without needing to allocate a month to each. If it took exactly one month for the drug to maybe start working, and I randomized the start date over three months, this would still be kind of like 90 one-month trials, but overlapping, and in a very non-random order. Because day N and day N+1 are likely to have the same or very similar treatment, and are also similar for other reasons, this is hardly like 90 independent trials. But it seems a lot better than three (more) independent trials.

At the end of this, I could learn the true start date and compare it to my time series of records. I did this by looking at the records first, and noting whether there was anything in them that looked like either something good happening, or a side effect beginning. I figured that if I could correctly guess about when the real drug started, I would take that as decent evidence that it made some difference, good or bad.
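One simple way to make that guess mechanical is to pick the candidate day that best splits the records into a ‘before’ and an ‘after’ with different averages. This is a sketch under the assumption that the records are a daily numeric score, not the procedure actually used:

```python
def guess_switch_day(scores, candidates):
    """Guess the drug's start day: for each candidate day, compare the
    mean score before it with the mean score from that day on, and
    return the candidate with the biggest gap."""
    def gap(day):
        before = scores[:day]
        after = scores[day:]
        return abs(sum(after) / len(after) - sum(before) / len(before))
    return max(candidates, key=gap)

# Toy records: a symptom score that jumps from about 1 to about 3 at
# day 40 (numbers invented purely for illustration).
scores = [1.0] * 40 + [3.0] * 60
guess = guess_switch_day(scores, range(1, len(scores)))
# With this clean series, the guess lands on day 40.
```

If the guess lands near the date the friend wrote down, that is some evidence the drug did something, good or bad; if it doesn’t, the apparent effects were probably noise or placebo.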

One good byproduct of this arrangement is that it lowers the chance of actually imagining a side effect. Since on each day there was a very low chance that the drug had just started, I never had much reason to expect side effects if I didn’t already have them. I also didn’t think about it much. Consequently I never experienced the kind of side effects that I would have imagined. I did experience other things at around that time that might have been side effects, and didn’t notice any improvement, so ultimately gave up on the drug.

Alternatively, if you are more pessimistic and respond to a low chance of beginning to take a drug each day by imagining side effects each day, you can easily tell that these are not real side effects, because they will start before the drug does. Or if instead you respond by imagining side effects with low probability each day, they probably won’t line up with the real start date, so you can distinguish them, unless the drug also causes side effects long after you start taking it, or with low probability each day. Even then, you can rule out all the things you imagine before the drug starts.

I’m not sure if this is a common method for avoiding placebo side-effects or for getting evidence about the positive effects of a drug, but I don’t remember hearing of other people doing it, so I thought I’d write about it.

Downsides include: you might not get to take the drug for months, and you might miss out on the positive placebo effects for the same reasons that you might miss out on the negative ones. And for some drugs, positive placebo effects are the main point of taking them.

Another downside is that you will have more interesting conversations with doctors. At one point I did this with a bunch of different drugs at once, many of which couldn’t be taken together, so I knew that I was maybe taking some unknown non-interacting subset, or nothing. This was complicated to explain to doctors when they asked me if I was taking any other medicines. ‘Um, with maybe 70% chance? No, I don’t know what they are, but I can give you a probability distribution. It’s fine, this boy I’m dating gives them to me and he says they are safe.’


Why is effective altruism new and obvious?

Crossposted from the EA forum ages ago. I meant to put it on my own blog then, but somehow failed to, it seems.

Ben Kuhn, playing Devil’s advocate:

Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.

The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.

I think this is a good point. If you find yourself in a small group advocating for an obvious and timeless idea, and it’s 2014, something a bit strange is probably going on. As a side note, if people actually come out and disagree, this is more worrying and you should really take some time out to be puzzled by it.

I can think of a few reasons that ‘effective altruism’ might seem so obvious and yet the EA movement might only just be starting.

I will assume that the term ‘effective altruism’ is intended to mean roughly what the words in it suggest: helping other people in efficient ways. If you took ‘effective altruism’ to be defined by principles regarding counterfactual reasoning and supererogatory acts and so on, I don’t think you should be surprised that it is a new movement. However I don’t think that’s what ‘effective altruism’ generally means to people; rather these recent principles are an upshot of people’s efforts to be altruistic in an effective manner.

Explanation 1: Turning something into a social movement is a higher bar than thinking of it

Perhaps people have thought of effective altruism before, and they just didn’t make much noise about it. Perhaps even because it was obviously correct. There is no temptation to start a ‘safe childcare’ movement because people generally come up with that on their own (whether or not they actually carry it out). On the other hand, if an idea is too obviously correct for anyone to advocate for, you might expect more people to actually be doing it, or trying to do it. I’m not sure how many people were trying to be effective altruists in the past on their own, but I don’t think it was a large fraction.

However there might be some other reason that people would think of the idea, but fail to spread it.

Explanation 2: There are lots of obvious things, and it takes some insight to pick the important ones to emphasize

Consider an analogy to my life. Suppose that after having been a human for many years I decide to exercise regularly and go to sleep at the same time every night. In some sense, these things are obvious, perhaps because my housemates and parents and the media have pointed them out to me on a regular basis basically since I could walk and sleep for whole nights at a time. So perhaps I should not be surprised if these make my life hugely better. However my acquaintances have also pointed out so many other ‘obvious’ things – that I should avoid salt and learn the piano and be nice and eat vitamin tablets and wear makeup and so on – that it’s hard to pick out the really high priority things from the rest, and each one takes scarce effort to implement and try out.

Perhaps, similarly, while ‘effectiveness’ and ‘altruism’ are obvious, so are ‘tenacity’ and ‘wealth’ and ‘sustainability’ and so on. This would explain the world taking a very long time to get to them. However it would also suggest that we haven’t necessarily picked the right obvious things to emphasize.

If this were the explanation, I would expect everyone to basically be on board with the idea, just not to emphasize it as a central principle in their life. I’m not sure to what extent this is true.

Explanation 3: Effectiveness and altruism don’t appear to be the contentious points

Empirically, I think this is why ‘effective altruism’ didn’t stand out to me as an important concept prior to meeting the Effective Altruists. As a teen, I was interested in giving all of my money to the most cost-effective charities (or rather, saving it to give later). It was also clear that virtually everyone else disagreed with me on this, which seemed a bit perplexing given their purported high valuation of human lives and the purported low cost of saving them. So I did actually think about our disagreement quite a bit. It did not occur to me to advocate for ‘effectiveness’ or ‘altruism’ or both of them in concert, I think because these did not stand out as the ideas that people were disagreeing over. My family was interested in altruism some of the time, and seemed reasonably effective in their efforts. As far as I could tell, where we differed in opinion was in something like whether people in foreign countries really existed in the same sense as people you can see do; whether it was ‘okay’ in some sense to buy a socially sanctioned amount of stuff, regardless of the opportunity costs; or whether one should have inconsistent beliefs.

Explanation 4: The disagreement isn’t about effectiveness or altruism

A salient next hypothesis then is that the contentious claim made by Effective Altruism is in fact not about effectiveness or altruism, and is less obvious.

‘Effective’ and ‘altruism’ together sound almost tautologically good. Altruism is good for the world almost by definition, and if you are going to be altruistic, you would be a fool to be ineffective at it.

In practice, Effective Altruism advocates for measurement and comparison. If measurement and comparison were free, this would obviously be a good idea. However since they are not, effective altruism stands for putting more scarce resources into measurement and comparison, when measurement is hard, comparison is demoralizing and politically fraught, and there are many other plausible ways that in practice philanthropy could be improved. For instance, perhaps it’s more effective to get more donors to talk to each other, or to improve the effectiveness of foundation staff at menial tasks. We don’t know, because at this meta level we haven’t actually measured whether measuring things is the most effective thing to do. It seems very plausible, but this is a much easier thing to imagine a person reasonably disagreeing with.

Effective altruists sometimes criticize people who look to overhead ratios and other simple metrics of performance, because of course these are not the important thing. We should care about results. If there is a charity that gets better results, but has a worse overhead ratio, we should still choose it! Who knew? As far as I can tell, this misses the point. Indeed, overhead ratio is not the same as quality. But surely nobody was suggesting that it was. If you were perfectly informed about outcomes, indeed you should ignore overhead ratios. If you are ignorant about everything, overhead ratios are a gazillion times cheaper to get hold of than data on results. According to this open letter, overhead ratios are ‘inaccurate and imprecise’ because 75-85% of organizations incorrectly report their spending on grants. However this means that 15-25% report it correctly, which in my experience is a lot more than even try to report their impact, let alone do it correctly. Again, the question of whether to use heuristics like this seems like an empirical one of relative costs and accuracies, where it is not at all obvious that we are correct.

Then there appear to be real disagreements about ethics and values. Other people think of themselves as having different values to Effective Altruists, not as merely liking their aggregative consequentialism to be ineffective. They care more about the people around them than those far away, or they care more about some kinds of problems than others, and they care about how things are done, not just the outcome. Given the large number of ethical disagreements in the world, and unpopularity of utilitarianism, it is hardly a new surprise that others don’t find this aspect of Effective Altruism obviously good.

If Effective Altruism really stands for pursuing unusual values, and furthermore doing this via zealous emphasis on accurate metrics, I’m not especially surprised that it wasn’t thought of years ago, nor that people disagree. If this is true though, I fear that we are using impolite debate tactics.


Commitments and affordances

[Epistemic status: a thing I’ve been thinking about, and may be more ignorant about than approximately everyone else.]

I don’t seem to be an expert on time management right now. But piecing through the crufty wreckage of my plans and lists and calendars, I do have a certain detailed viewpoint on particular impediments to time management. You probably shouldn’t trust my understanding here, but let me make some observations.

Sometimes, I notice that I wish I did more of some kind of activity. For instance, I recently noticed that I did less yoga than I wanted—namely, none ever. I often notice that my room is less tidy than I prefer. Certain lines of inquiry strike me as neglected and fertile and I have an urge to pursue them. This sort of thing happens to me many times per day, which I think is normal for a human.

The most natural and common response to this is to say (to oneself or to a companion) something like, ‘I should really do some yoga’, and then be done with it. This has the virtue of being extremely cheap, and providing substantial satisfaction. But sophisticated non-hypocritical materialists like myself know that it is better to take some sort of action, right now, to actually cause more yoga in the future. For instance, one could make a note in one’s todo list to look into yoga, or better yet, put a plan into one’s reliable calendar system.

Once you have noticed that merely saying ‘I should really do some yoga’ has little consequence, this seems quite wondrous—a set of rituals that can actually cause your future self to do a thing! What power. Yet somehow, life does not become as excellent as one might think as a result. It instead becomes a constant stream of going to yoga classes that you don’t feel like.

One kind of problem seems to come from drawing conclusions that are too strong from the feeling of wanting to do a thing. For instance, hearing your brain say, ‘Ooh babies! I want babies!’ at a baby, and assuming that means you want babies, and should immediately stop your birth control. This is especially a problem if the part of your brain that wants things (without regard to trade-offs) also follows up with instructions on how to get them. “Oh man, I really love drawing with oil pastels…I should get some…I could set up a little studio in my basement, and enter contests…I should start by buying some pastels on the way home, from that art shop near my house”. I have noticed this before, and now more often think “Oh man, I really love drawing with oil pastels…but probably not enough that it’s worth doing…I’ll put it next to having babies and starting a startup in the pile of nice things I could do if I didn’t have even better things to do”.

Another kind of problem, which is what I’m actually trying to write about, is that after establishing that a thing would actually be great to do, it can be very natural to make a commitment to doing it. For instance, because I wanted to do some yoga, I signed up to a yoga class, and put a repeating event in my calendar. Similarly, if I want to see a person, often I will make an appointment to get lunch with them or something, which I am then committed to. Commitments often go badly. The whole idea of being committed is that you will do the thing regardless of your feelings about it at the time. Which has costs—many things are just much worse if you don’t feel like doing them, either because you need to feel like doing them to do them well, or because not feeling like doing them is information about their value, or because doing a thing you don’t feel like is unpleasant in itself.

There are of course upsides to committing—for instance it allows everyone to coordinate their plans, and doing a thing once may be more valuable if you have a strong expectation that you will do it another ten times. I think the error I make is just defaulting to commitments without much concern for whether commitments are appropriate to the situation. My impression is that other people also do this.

If I now want to do some yoga in the future, and I don’t want to commit myself to it, how else can I increase the chance of it happening? The options for influencing my future self’s behavior seem pretty much like those for influencing other people’s behavior (I actually rarely commit other people to doing things against their will). Here are some:

  • cause my future self to notice that yoga is an option
  • let her know about the virtues of doing yoga
  • make yoga salient
  • add further incentives to doing yoga
  • make it easy to do yoga

If I know the virtues of doing yoga, usually my future self will automatically know about them too, so that one isn’t widely applicable. Incentivising doing yoga might be good sometimes, but it sort of suggests that my natural incentives are substantially misaligned with my future self on this, and if that is so, it seems like there are deeper problems e.g. around very high discount rates, that perhaps I should sort out. That is, I’d like it if yoga didn’t just seem appealing because the costs are tomorrow, and I don’t care about tomorrow. Nonetheless, to some extent this is why yoga is appealing, and incentives can help align interests (especially if present me pays for the incentives, instead of stealing from some other future self).

The remaining options—make yoga known, salient, and easy—might be summarised as causing my future self to have an affordance for yoga. They might collectively be achieved by going to yoga once, so that I know where it is and what it involves, and have already paid some of the logistical costs. Also, I will gain a concrete sense of what yoga does that can pop up if I want that kind of thing. My guess is that I should do this kind of thing more often, and that I mostly don’t because I don’t have so much of an affordance for it as I do for making commitments. I haven’t actually tried this a lot however, so I’m not sure how often replacing commitments with affordances is good. It does seem likely good to at least notice that there are often alternatives to commitments, for when you are trying to have a causal influence on your future behavior.

One place I have tried this more is in social engagements. Replacing commitments with affordances is part of the motivation for things like the Berkeley Schelling Point (a regular time and cafe at which people can go if they want to hang out), a breakfast club that I’m part of, and my ‘casual social calendar’, in which I write things I’m doing anyway for which I’d be happy for company (e.g. going to the gym) so that my friends can join me if they feel like it. These have varying levels of overall success, but I think they are all better than higher commitment versions of them would be.

Thanks to Ben Hoffman for conversation that inspired this post.

How do we know our own desires?

Sometimes I find myself longing for something, with little idea what it is.

This suggests that perceiving desire and perceiving which thing it is that is desired by the desire are separable mental actions.

In this state, I make guesses as to what I want. Am I thirsty? (I consider drinking some water and see if that feels appealing.) Do I want to have sex? (A brief fantasy informs me that sex would be good, but is not what I crave.) Do I want social comfort? (I open Facebook, maybe that has social comfort I could test with…)

If I do infer the desire in this way, I am still not directly reading it from my own mind. I am making educated guesses and testing them using my mind’s behavior.

Other times, it seems like I immediately know my own desires. When that happens, am I really receiving them introspectively, or am I merely playing the same inference game more insightfully?

We usually suppose that people are correct about their own immediate desires. They may be wrong about whether they want cookie A or cookie B, because they are misinformed about which one is delicious. But if they think they want to eat something delicious, we trust them on that.

On the model where we are mostly inferring our desires from more general feelings of wanting, we might expect people to be wrong about their desires fairly often.

EA as offsetting

Scott made a good post about vegetarianism.

But the overall line of reasoning sounds to me like:

“There’s a pretty good case that one is morally compelled to pay for people in the developing world to have shoes, because it looks pretty clear now that people in the developing world have feet that can benefit a lot from shoes.

However, there is this interesting argument that it is ok to not buy shoes, and offset the failing through donating a small amount to effective charities.”

— which I think many Effective Altruists would consider at least a strange and inefficient way of approaching the question of what one should do, though it does arrive at the correct answer. In particular, why take the detour through an obligation to do something that is apparently not as cost-effective as the offsetting activity? (If it were as cost-effective, we would not prefer the offsetting activity.) That it would be better to replace the first activity with the second seems like it should cast doubt on the reasoning that originally suggested the first activity. Assuming cost-effectively doing good is the goal.

That is, perhaps shoes are cost-effective. Perhaps AMF is. One thing is for sure though: it can’t be that shoes are one of the most cost-effective interventions and can also be cost-effectively offset by donating to AMF instead. If you believe that shoes can be offset, this demonstrates that shoes are less cost-effective than the offset, and so of little relevance to Effective Altruists. We should just do the ‘offset’ activity to begin with.

Does the above line of reasoning make more sense in the case of vegetarianism? If so, what is the difference? I have some answers, but I’m curious about which ones matter to others.