The escape duty

I’m going to explain one of my favorite life-improvement techniques of the past couple of years.

I thought of it as a result of talking to Ben Hoffman. He mentioned some innovation that worked for him but sounded impossible for me. I think it was ‘regularly reflecting on what you are doing and how it could be better’ or something vague and virtuous like that. I’m a big fan of reflecting on one’s life and how to improve it, but doing it at really appropriate times seemed hard, because often I’m distracted by other things, especially when things are going badly somehow. ‘Things could be better’ is not a very salient trigger upon which to act. And I had been struggling to allocate time to reflect on my life even when I actually put it in my plan for the day.

But then I realized that there was a thing I already wanted to do exactly when things were going badly—play a computer game. At the time it was a game I shall call SPP.

So I set these rules:

  1. I am not ever allowed to play SPP unless I have first gone to the place on my computer where I reflect, and written anything at all about what is going on in my life and how it could be better.
  2. If I reflect, I may then play SPP for five minutes.

This could be repeated arbitrarily often. Like, I can just swap back and forth between reflecting and playing SPP all afternoon if I want.

Consequently, every time the rest of my life became less appealing than playing SPP, I would briefly think about what was going wrong, and try to fix it. It is easy to remember to play a computer game. It is also easy (for me at least) to remember that I must not do a thing that I often want to do—much more so than it is to remember that I should do a thing that I rarely think of.

This system has worked really well for me, I think. If I am feeling bad in any way, I’m very willing to reflect for an arbitrarily short time in order to be blamelessly playing a computer game for five minutes. And once I’m reflecting, I almost always do it for long enough to make a number of concrete improvements to the situation (e.g. put on noise-cancelling headphones and find some painkillers, or think of some way to make the task at hand less complicated). And I feel like this usually helps. Sometimes I become more physically comfortable. Sometimes I realize I should be doing some other activity entirely.

(Writing in particular seems more useful for me than freeform thinking—I might put down ten ideas about what is going wrong, and consider some ways to improve all of them, and then work through the list, which is too complicated for thinking.)

From the perspective of productivity, something like tens of minutes of gaming per day is a pretty good exchange for some well-timed problem solving, and probably pays for itself in terms of other dallying. From the perspective of having fun, this arrangement is more entertaining than most productivity hacks, because it allows me to play computer games about as much as I feel like.

There have been times when I have just gone back and forth between playing games and reflecting for many iterations. For instance, when I’m sick or have a bad headache. I think the likely alternative in those cases would often be to just focus entirely on escapism, and adding the short bouts of iterative improvement has helped me to actually escape from needing escapism faster.

It helps that playing a computer game is a natural response to a variety of problems for me (lack of motivation, physical pain, distractions, anxiety, social distress), but my guess is that other people feel the same way about other procrastinations. I got tired of SPP and didn’t have a new game for a while, which I think made my life worse. Lately I have found another one, and have picked this habit back up again.

I expect there are many reasons this won’t work for other people, but maybe it will for some, and maybe some variant on the underlying idea is useful. I think the underlying idea is something like this: instead of trying to remember to do X at time Y, find thing Z that you generally want to do at time Y, and prohibit it always without X. Or less generally: make some beloved source of procrastination contingent on a small amount of agentic contemplation of your problems.

Meteuphoric games

Back to the original, titular purpose of this blog: getting excited about how things are metaphors for other things. Here are some things that are structurally similar in that they are games where you try to find structural similarities.

They also share the properties of being amusing to me and not generally known, so I thought I’d share them. I don’t know if they are generally amusing—if you try one, feel free to leave a comment about whether you enjoyed it or not.

1. My thought is like…

This is a game I learned as a child. It goes like this:

Kate: [thinks of a thing] My thought is like –

Sarah: Sam Smith

Robert: the word of God

Kate: actually it was a palm tree

Sarah: Hmm, but Sam Smith is basically just like a palm tree. They are both tall, covered with a sparse brown fur, and appreciate dryness. Also, you’ve been thinking about them lately. Also, you are disproportionately interested in getting your hands on their nuts.

Robert: But palm trees are well known to symbolize the word of God. Probably because when you are lost in a metaphorical desert, God’s word guides you to the oasis of salvation. That’s why the faithful carry them on Palm Sunday.

Kate: Sarah makes the best case.

Sarah: [thinks of a thing] My thought is like –

 

2. Crosswords where every clue points to two words that fit in the crossword

Really, making them is the game that most involves finding structural similarities in things—solving them involves the opposite.

For instance, what sentence could you use to describe either of these words at once?

CALVE and CARVE. Maybe, ‘an action that increases the matching items of cow you have’

NET and SET. Maybe, ‘Contains things that share a certain property’

LOADED and LOANED. ‘His family could drive a Tesla because it was ——‘

Here’s one I made earlier, but it’s not great. Here’s a famous crossword that probably inspired this activity indirectly.

 

3. Explain everything in terms of status

One player asks for an explanation of some phenomenon. Others give answers that make the phenomenon largely about status. The first player chooses the best explanation. Fair warning: I made this game up and have not played it properly. Improper variants are amusing to me, but I’m not sure if they are to others.

Example: Why do we sleep?

Evolution bothered to develop eyes and ears and legs and fight or flight responses and the tendency to look around sharply if you hear something suspicious in large part because it is extremely dangerous to have no idea what is going on in your environment, and to not be able to run away from it. Yet after all this wariness, animals just shut down, paralyzed, and ignore everything for some large fraction of the time. If this seems potentially very costly, that’s because it is a kind of costly signaling.

One of the main threats to an animal is other animals of the same species. And an important way to fend off attacks from animals of the same species is to make and advertise social alliances. Sleep is a costly signal of having good social alliances. It shows that you trust that you won’t be killed, even if your own activities make it trivially easy to kill you. Looking relaxed and at ease is the universal signal that you believe your opponent would be insane to fight you, because you can beat them from a position of being relaxed and at ease. Being so relaxed that you wouldn’t even notice if you were in a fight is the universal signal that others have got your back.

This also helps explain why people who are scared or lonely find it hard to sleep, and why people who are closely allied like to sleep together.

 

4. Constrained bananagrams: matching triplets

Bananagrams is a game lots of people play. I think I made up the variant where you add constraints. e.g. each word has to rhyme with at least one other word in your grid, or all the words have to be verbs. The matching triplet constraint is that each word must be part of a triplet of words in your grid for which you can give an explanation for why they are basically similar. I think I haven’t actually tried this version with another person because I’ve run out of people in my immediate vicinity who are willing to play variants of bananagrams with me. This one seems somewhat harder than other variants, which are generally substantially harder than Bananagrams.


In-progress game. If I had an opponent, they might not accept that ‘drug’, ‘bot’ and ‘desk’ are things that can make various online conversation partners less coherent.


Bananagrams constrained to rhyming words


I forget what the constraint was here, but I think it was fun.

 

5. Codenames with Dixit cards

Codenames is a real game that basically involves trying to give someone a clue that points to four or five of twenty or so words another person is looking at, without cluing the rest. My friends and I like to play the game with Dixit cards instead of words. According to the internet we are not the first to think of this.


A Codenames grid made from Dixit cards. If you were by chance assigned to clue the first eight cards, you might say ‘sword 4’ and hope that 1B and 1E are more swordy than anything else around. I can’t think of anything really good to say here. Picture from Contigo at BoardGameGeek

***

Do you know any more good games for this list?

Start taking drugs randomly

I wanted to try a drug. I also wanted to be able to distinguish the direct effects of the drug from the placebo effects of the drug. Most importantly, I wanted to isolate placebo side effects. It seemed there was a chance of real side effects, and that meant that it was basically guaranteed that I would imagine those side effects. However, if I could tell that I was imagining them, I figured I might stop doing it.

The usual way to distinguish placebo effects from real effects is to have a bunch of people, and give half of them the real drug and half the placebo. Then if five percent of both groups get a headache, you know that it’s an entirely placebo headache.

Of course, that had already been worked out for this drug. It probably causes a few real side effects, and a few placebo ones. But what I wanted to know was whether the side effects were real in my own particular case.

If you want to figure this out, a natural idea is to get someone else to randomly give you drug or placebo on different days (or on different few-day bouts, if the drug takes time to act). After enough times, you can tell if the drug periods are better or worse.

This is a hard enough experiment to run on yourself if it takes a day for the drug to work. But what if the drug might take a month to work or to have side effects? Someone could randomize whether you get the drug or placebo for whole several-month-long stints, but at this point it will take you a year to get a few data points, and each one will be as different from the others as several-month-long periods are wont to be, aside from any effect from the drug. And you will have missed out on taking the drug for half a year.

Here is a different way to do this: randomize the start date. I asked a friend to pick a random day within the next few months, and then give me placebo pills until that day and from then on, a real pill. He did this by putting all the pills inside big red capsules so that I couldn’t tell them apart, and then put them in a pill advent-calendar, changing over at the right day. He wrote down the date of the switch. Then I took the pills, kept records of the thing I was trying to fix, and watched out for anything that seemed like the feared side effects.

This way, I was basically getting a bunch of somewhat different one-month periods in a row, without needing to allocate a month to each. If it took exactly one month for the drug to maybe start working, and I randomized the start date over three months, this would still be kind of like 90 one-month trials, but overlapping, and in a very non-random order. Because day N and day N+1 are likely to have the same or very similar treatment, and are also similar for other reasons, this is hardly like 90 independent trials. But it seems a lot better than three (more) independent trials.
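The overlapping-windows idea is easy to play with in a toy simulation. The sketch below is not the author’s actual procedure; the effect size, the lag, the noise level, and the simple split-the-series detection rule are all made-up assumptions. It hides a random start day, generates a noisy daily outcome that steps up some days after the switch, and then tries to recover the start day from the records alone:

```python
import random
import statistics

random.seed(0)

def simulate_trial(n_days=180, window=90, lag=30, effect=1.0, noise=1.0):
    """Toy model of the randomized-start-date design.

    A friend secretly picks a start day uniformly within the first
    `window` days; the drug's effect kicks in `lag` days after that.
    We record a daily outcome score and later guess the start day by
    finding the day that best splits the series into low/high halves.
    """
    start = random.randrange(window)  # secret switch from placebo to drug
    series = [random.gauss(0, noise) + (effect if day >= start + lag else 0)
              for day in range(n_days)]

    # Difference in mean outcome after vs. before a candidate split day.
    # For a clean step, this is maximized exactly at the step.
    def gap(day):
        return statistics.mean(series[day:]) - statistics.mean(series[:day])

    best_step = max(range(10, n_days - 10), key=gap)
    # Subtract the assumed lag to turn the detected step into a start-day guess.
    return start, best_step - lag

true_start, guessed_start = simulate_trial()
print(true_start, guessed_start)
```

With a strong effect and low noise the guess lands close to the true day; with a weak effect it often doesn’t, which mirrors the point that the overlapping windows are informative but far short of 90 independent trials.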

At the end of this, I could learn the true start date and compare it to my time series of records. I did this by looking at the records first, and noting whether there was anything in them that looked like either something good happening, or a side effect beginning. I figured that if I could correctly guess about when the real drug started, I would take that as decent evidence that it made some difference, good or bad.

One good byproduct of this arrangement is that it lowers the chance of actually imagining a side effect. Since on each day there was a very low chance that the drug had just started, I never had much reason to expect side effects if I didn’t already have them. I also didn’t think about it much. Consequently I never experienced the kind of side effects that I would have imagined. I did experience other things at around that time that might have been side effects, and didn’t notice any improvement, so ultimately gave up on the drug.

Alternatively, if you are more pessimistic and respond to a low chance of beginning to take a drug each day by imagining side-effects each day, you can easily tell that these are not real side effects because they will start before the drug. Or if instead you respond by imagining side-effects with low chance each day, they probably won’t line up with the real start date, so you can distinguish, unless the drug also causes side effects long after you started taking it, or with low probability each day. Still then, you can rule out all the things you imagine before the thing starts.

I’m not sure if this is a common method for avoiding placebo side-effects or for getting evidence about the positive effects of a drug, but I don’t remember hearing of other people doing it, so I thought I’d write about it.

Downsides include: you might not get to take the drug for months, and you might miss out on the positive placebo effects for the same reasons that you might miss out on the negative ones. And for some drugs, positive placebo effects are the main point of taking them.

Another downside is that you will have more interesting conversations with doctors. At one point I did this with a bunch of different drugs at once, many of which couldn’t be taken together, so I knew that I was maybe taking some unknown non-interacting subset, or nothing. This was complicated to explain to doctors when they asked me if I was taking any other medicines. ‘Um, with maybe 70% chance? No, I don’t know what they are, but I can give you a probability distribution. It’s fine, this boy I’m dating gives them to me and he says they are safe.’

 

Why is effective altruism new and obvious?

Crossposted from the EA forum ages ago. I meant to put it on my own blog then, but somehow failed to, it seems.

Ben Kuhn, playing Devil’s advocate:

Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.

The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.

I think this is a good point. If you find yourself in a small group advocating for an obvious and timeless idea, and it’s 2014, something a bit strange is probably going on. As a side note, if people actually come out and disagree, this is more worrying and you should really take some time out to be puzzled by it.

I can think of a few reasons that ‘effective altruism’ might seem so obvious and yet the EA movement might only just be starting.

I will assume that the term ‘effective altruism’ is intended to mean roughly what the words in it suggest: helping other people in efficient ways. If you took ‘effective altruism’ to be defined by principles regarding counterfactual reasons and supererogatory acts and so on, I don’t think you should be surprised that it is a new movement. However I don’t think that’s what ‘effective altruism’ generally means to people; rather these recent principles are an upshot of people’s efforts to be altruistic in an effective manner.

Explanation 1: Turning something into a social movement is a higher bar than thinking of it

Perhaps people have thought of effective altruism before, and they just didn’t make much noise about it. Perhaps even because it was obviously correct. There is no temptation to start a ‘safe childcare’ movement because people generally come up with that on their own (whether or not they actually carry it out). On the other hand, if an idea is too obviously correct for anyone to advocate for, you might expect more people to actually be doing it, or trying to do it. I’m not sure how many people were trying to be effective altruists in the past on their own, but I don’t think it was a large fraction.

However there might be some other reason that people would think of the idea, but fail to spread it.

Explanation 2: There are lots of obvious things, and it takes some insight to pick the important ones to emphasize

Consider an analogy to my life. Suppose that after having been a human for many years I decide to exercise regularly and go to sleep at the same time every night. In some sense, these things are obvious, perhaps because my housemates and parents and the media have pointed them out to me on a regular basis basically since I could walk and sleep for whole nights at a time. So perhaps I should not be surprised if these make my life hugely better. However my acquaintances have also pointed out so many other ‘obvious’ things – that I should avoid salt and learn the piano and be nice and eat vitamin tablets and wear makeup and so on – that it’s hard to pick out the really high priority things from the rest, and each one takes scarce effort to implement and try out.

Perhaps, similarly, while ‘effectiveness’ and ‘altruism’ are obvious, so are ‘tenacity’ and ‘wealth’ and ‘sustainability’ and so on. This would explain the world taking a very long time to get to them. However it would also suggest that we haven’t necessarily picked the right obvious things to emphasize.

If this were the explanation, I would expect everyone to basically be on board with the idea, just not to emphasize it as a central principle in their life. I’m not sure to what extent this is true.

Explanation 3: Effectiveness and altruism don’t appear to be the contentious points

Empirically, I think this is why ‘effective altruism’ didn’t stand out to me as an important concept prior to meeting the Effective Altruists. As a teen, I was interested in giving all of my money to the most cost-effective charities (or rather, saving it to give later). It was also clear that virtually everyone else disagreed with me on this, which seemed a bit perplexing given their purported high valuation of human lives and the purported low cost of saving them. So I did actually think about our disagreement quite a bit. It did not occur to me to advocate for ‘effectiveness’ or ‘altruism’ or both of them in concert, I think because these did not stand out as the ideas that people were disagreeing over. My family was interested in altruism some of the time, and seemed reasonably effective in their efforts. As far as I could tell, where we differed in opinion was in something like whether people in foreign countries really existed in the same sense as people you can see do; whether it was ‘okay’ in some sense to buy a socially sanctioned amount of stuff, regardless of the opportunity costs; or whether one should have inconsistent beliefs.

Explanation 4: The disagreement isn’t about effectiveness or altruism

A salient next hypothesis then is that the contentious claim made by Effective Altruism is in fact not about effectiveness or altruism, and is less obvious.

‘Effective’ and ‘altruism’ together sound almost tautologically good. Altruism is good for the world almost by definition, and if you are going to be altruistic, you would be a fool to be ineffective at it.

In practice, Effective Altruism advocates for measurement and comparison. If measurement and comparison were free, this would obviously be a good idea. However since they are not, effective altruism stands for putting more scarce resources into measurement and comparison, when measurement is hard, comparison is demoralizing and politically fraught, and there are many other plausible ways that in practice philanthropy could be improved. For instance, perhaps it’s more effective to get more donors to talk to each other, or to improve the effectiveness of foundation staff at menial tasks. We don’t know, because at this meta level we haven’t actually measured whether measuring things is the most effective thing to do. It seems very plausible, but this is a much easier thing to imagine a person reasonably disagreeing with.

Effective altruists sometimes criticize people who look to overhead ratios and other simple metrics of performance, because of course these are not the important thing. We should care about results. If there is a charity that gets better results, but has a worse overhead ratio, we should still choose it! Who knew? As far as I can tell, this misses the point. Indeed, overhead ratio is not the same as quality. But surely nobody was suggesting that it was. If you were perfectly informed about outcomes, indeed you should ignore overhead ratios. If you are ignorant about everything, overhead ratios are a gazillion times cheaper to get hold of than data on results. According to this open letter, overhead ratios are ‘inaccurate and imprecise’ because 75-85% of organizations incorrectly report their spending on grants. However this means that 15-25% report it correctly, which in my experience is a lot more than even try to report their impact, let alone do it correctly. Again, the question of whether to use heuristics like this seems like an empirical one of relative costs and accuracies, where it is not at all obvious that we are correct.

Then there appear to be real disagreements about ethics and values. Other people think of themselves as having different values to Effective Altruists, not as merely liking their aggregative consequentialism to be ineffective. They care more about the people around them than those far away, or they care more about some kinds of problems than others, and they care about how things are done, not just the outcome. Given the large number of ethical disagreements in the world, and unpopularity of utilitarianism, it is hardly a new surprise that others don’t find this aspect of Effective Altruism obviously good.

If Effective Altruism really stands for pursuing unusual values, and furthermore doing this via zealous emphasis on accurate metrics, I’m not especially surprised that it wasn’t thought of years ago, nor that people disagree. If this is true though, I fear that we are using impolite debate tactics.


Commitments and affordances

[Epistemic status: a thing I’ve been thinking about, and may be more ignorant about than approximately everyone else.]

I don’t seem to be an expert on time management right now. But piecing through the crufty wreckage of my plans and lists and calendars, I do have a certain detailed viewpoint on particular impediments to time management. You probably shouldn’t trust my understanding here, but let me make some observations.

Sometimes, I notice that I wish I did more of some kind of activity. For instance, I recently noticed that I did less yoga than I wanted—namely, none ever. I often notice that my room is less tidy than I prefer. Certain lines of inquiry strike me as neglected and fertile and I have an urge to pursue them. This sort of thing happens to me many times per day, which I think is normal for a human.

The most natural and common response to this is to say (to oneself or to a companion) something like, ‘I should really do some yoga’, and then be done with it. This has the virtue of being extremely cheap, and providing substantial satisfaction. But sophisticated non-hypocritical materialists like myself know that it is better to take some sort of action, right now, to actually cause more yoga in the future. For instance, one could make a note in one’s todo list to look into yoga, or better yet, put a plan into one’s reliable calendar system.

Once you have noticed that merely saying ‘I should really do some yoga’ has little consequence, this seems quite wondrous—a set of rituals that can actually cause your future self to do a thing! What power. Yet somehow, life does not become as excellent as one might think as a result. It instead becomes a constant stream of going to yoga classes that you don’t feel like.

One kind of problem seems to come from drawing conclusions that are too strong from the feeling of wanting to do a thing. For instance, hearing your brain say, ‘Ooh babies! I want babies!’ at a baby, and assuming that means you want babies, and should immediately stop your birth control. This is especially a problem if the part of your brain that wants things (without regard to trade-offs) also follows up with instructions on how to get them. “Oh man, I really love drawing with oil pastels…I should get some…I could set up a little studio in my basement, and enter contests…I should start by buying some pastels on the way home, from that art shop near my house”. I have noticed this before, and now more often think “Oh man, I really love drawing with oil pastels…but probably not enough that it’s worth doing…I’ll put it next to having babies and starting a startup in the pile of nice things I could do if I didn’t have even better things to do”.

Another kind of problem, which is what I’m actually trying to write about, is that after establishing that a thing would actually be great to do, it can be very natural to make a commitment to doing it. For instance, because I wanted to do some yoga, I signed up to a yoga class, and put a repeating event in my calendar. Similarly, if I want to see a person, often I will make an appointment to get lunch with them or something, which I am then committed to. Commitments often go badly. The whole idea of being committed is that you will do the thing regardless of your feelings about it at the time. Which has costs—many things are just much worse if you don’t feel like doing them, either because you need to feel like doing them to do them well, or because not feeling like doing them is information about their value, or because doing a thing you don’t feel like is unpleasant in itself.

There are of course upsides to committing—for instance it allows everyone to coordinate their plans, and doing a thing once may be more valuable if you have a strong expectation that you will do it another ten times. I think the error I make is just defaulting to commitments without much concern for whether commitments are appropriate to the situation. My impression is that other people also do this.

If I now want to do some yoga in the future, and I don’t want to commit myself to it, how else can I increase the chance of it happening? The options for influencing my future self’s behavior seem pretty much like those for influencing other people’s behavior (I actually rarely commit other people to doing things against their will). Here are some:

  • cause my future self to notice that yoga is an option
  • let her know about the virtues of doing yoga
  • make yoga salient
  • add further incentives to doing yoga
  • make it easy to do yoga

If I know the virtues of doing yoga, usually my future self will automatically know about them too, so that one isn’t widely applicable. Incentivising doing yoga might be good sometimes, but it sort of suggests that my natural incentives are substantially misaligned with my future self on this, and if that is so, it seems like there are deeper problems e.g. around very high discount rates, that perhaps I should sort out. That is, I’d like it if yoga didn’t just seem appealing because the costs are tomorrow, and I don’t care about tomorrow. Nonetheless, to some extent this is why yoga is appealing, and incentives can help align interests (especially if present me pays for the incentives, instead of stealing from some other future self).

The remaining options—make yoga known, salient, and easy—might be summarised as causing my future self to have an affordance for yoga. They might collectively be achieved by going to yoga once, so that I know where it is and what it involves, and have already paid some of the logistical costs. Also, I will gain a concrete sense of what yoga does that can pop up if I want that kind of thing. My guess is that I should do this kind of thing more often, and that I mostly don’t because I don’t have so much of an affordance for it as I do for making commitments. I haven’t actually tried this a lot, however, so I’m not sure how often replacing commitments with affordances is good. It does seem likely good to at least notice that there are often alternatives to commitments, for when you are trying to have a causal influence on your future behavior.

One place I have tried this more is in social engagements. Replacing commitments with affordances is part of the motivation for things like the Berkeley Schelling Point (a regular time and cafe at which people can go if they want to hang out), a breakfast club that I’m part of, and my ‘casual social calendar’, in which I write things I’m doing anyway for which I’d be happy for company (e.g. going to the gym) so that my friends can join me if they feel like it. These have varying levels of overall success, but I think they are all better than higher commitment versions of them would be.

Thanks to Ben Hoffman for conversation that inspired this post.