I wrote most of this post last year after spending several days at EA Global, a conference for Effective Altruists. I just went to this year’s, which reminds me I should post it. In good news, I am not writing about how to get around to things.
Many Effective Altruists are trying to do things that are really good with their lives, and EA Global can be an exciting time for hashing out which things those should be. Which brings to the fore some issues with doing this.
It is hard to have good plans for creating large amounts of value in the world, especially if you are an ignorant young person. Happily, lots of people know different things to one another, including many that are relevant to the probable success of everyone’s plans. Unhappily, exchanging this information can be somewhat fraught.
How things go wrong
1. The attack
One kind of bad thing that can happen is Ann says ‘I am starting a student group for minority square dancers because I think it will change the values of the future artificial arachnid civilization that arises in our place’ and everyone else thinks, ‘Oh boy, I know the correct answer to this one’ and they all jump in and helpfully explain that Ann’s plan is terrible in almost every way, and Ann is sad, and maybe thinks she shouldn’t hang out with these people anyway.
There is also a much milder version of this, where Ann says ‘I am starting a student group for open borders’ and Ben argues that open borders might not be that good, and that he thinks it probably isn’t the best thing to fight for. And Ann agrees in principle that it is good to discuss these things, but she admittedly still feels kind of sad about Ben’s willingness to criticize and oppose her, and it makes their friendship a bit worse.
2. The polite sidestep
I am not sure if #1 actually happens. Probably the milder version does, but I don’t remember seeing it much. A different kind of bad thing seems more common. Clara says ‘I’m going to work at Startup.io and earn money to give to my favorite EA cause—Raising Awareness About Goodness—and at the same time, I’ll be directly improving the lives of thousands of children somehow’. And Dora and Eloise and Freda would not have this plan themselves in a fit, because they can see many reasons it probably won’t work. They have heard bad things about Startup.io, and they think RAAG is going to receive more money than it knows what to do with from other sources and is more bottlenecked on talent, and they can think of numerous impediments to the bettering of thousands of children’s lives occurring amidst all this. However, they are polite and friendly, so they don’t say any of that, and instead say ‘Oh, um, great, that’s very virtuous of you’ and ask if she is looking forward to having a shorter commute.
3. The inadvertent personal question
I saw a third kind of bad outcome at EA Global last time, which was sometimes my own fault. My conversation partner would have views on what it would be strategic for me to do. For instance, to give a talk at EA Global. (I was in a good position to decide to give a talk at the last minute, because there was one in the program with my name on it). So they would suggest this. But my reasons for not giving a talk were not just a person-neutral evaluation of giving talks. They included relatively personal considerations, such as my then high proclivity to anxiety attacks, and my lack of knowledge about how to gracefully flee an auditorium from an initial position of center stage.
Psychological health issues and embarrassing personal fleeing strategy deficiencies are more personal topics than my conversation partner was probably meaning to be asking me about, so this might have been awkward for them.
Furthermore, they aren’t topics that those people have particular expertise on, so it was also unfortunate if we missed an opportunity for them to give me any unique knowledge they have about why giving talks is generally so great.
This particular case is perhaps not a usual problem, because people are usually more given to polite evasiveness than me. And it wasn’t so bad. But it is a real example that is my own to discuss.
Here is a fictional example where it is a more central issue. Gretel joins the conversation about Clara’s move to Startup.io late. She says that while many people should be thrilled to be at Startup.io, she thinks Clara is especially talented and could do better. She suggests Clara consider doing good more directly, perhaps in biotechnology. She suggests some specific places where she knows people.
In fact, Clara agrees that there are probably higher value careers in the abstract, and that biotechnology would be a much better fit for some sort of stylized Clara. But she wants to be at Startup.io much more than she wants to try to start a career in biotechnology, for several reasons. First, she feels like she can’t do anything at all except pass tests, and longs for even just a few months where she gets to put in effort and have undeniable value come out. Second, and relatedly, she would like her family to know she has this power, even if she ultimately chooses not to use it. Third, she knows her husband is unhappy here, and so they might want to move, and the Startup job is fine to do for a short time, and allows for remote work. Plus, she sort of hates being around everyone in biotech—including some people in this room—for reasons she can’t put a finger on. And if she has to spend every day being told what to do by them, then she suspects she will also spend every day imagining killing them, which will be a distraction, and also kind of worrying.
Probably none of these things will make for mutually enjoyable or informative conversation, since Clara already knows more about these issues than the others, and while the others would not be bored hearing some of the human-interest details, their enjoyment would probably be at the expense of Clara’s, especially in the overall context of defending her apparently poor choices.
In general, people’s decisions about what to do with their lives factor in many details of their lives, which are often personal. If you ask why a person is doing something that doesn’t seem effective to you, the answer can easily be ‘I am not motivated to do anything unless it is located within five miles of this particular human’, or ‘I find moral motivations utterly demoralizing, so I really have to do something that I don’t consider overwhelmingly important, and then redirect earnings to something good after the fact’ or ‘Thinking about the most effective thing is stressful and hard, so I can only do it for an hour per month, and I’m up to the bit where I give some money to a reasonable charity’. Which are perhaps interesting topics for discussion if both parties want to, but since they sometimes don’t—either because it’s uncomfortable, or because there are more interesting things to discuss—it can be bad to end up there accidentally.
I think people often recognize that it is bad to end up in these places accidentally, which often takes them back to a different version of #2.
Of these problems, my guess is that #2 is by far the most directly destructive problem for the world, but that #2 is largely caused by fear of landing in #1 or #3, which are locally socially destructive. So it seems worth addressing fear of #1 and #3, possibly by addressing #1 and #3.
I have one speculative suggestion for this kind of situation, which I haven’t actually tested much, but I keep forgetting to, so I’m just going to suggest it: debate beliefs, not actions.
For example, if your conversation partner says they are going to grad school in philosophy, and you think this is a bad idea, instead of saying ‘that sounds ineffective’ or ‘oh, cool, philosophy is interesting’, you should ask about the factual questions that you think you and they might disagree on. For instance, ‘do you think there are high impact questions to answer in philosophy?’ or ‘do you think there will be jobs available in it, or is it more of a high-risk high-reward bet?’
This should help in avoiding #1 because debating a question that is indirectly related to the other person’s choices is less aggressive than debating their choices. And it should help in avoiding #3 because asking about one of the general facts informing their choice, rather than their choice, removes the need to bring up the other more personal facts informing it. My hope is that this makes it possible to talk about such things at all with fewer costs, and so to avoid #2.
On top of that, it means you get to talk more about factual questions of broad relevance, which are arguably more valuable to establish truth about. For instance, it is better to know whether giving talks tends to be helpful in general than it is to establish whether I personally will be able to give a good talk today.
This seems like an object-level procedure for implementing a model-level thing I’ve gotten some good results from and wish more people would try out.
The right response to presumed-good-faith disagreement is trying to do a model-compare that efficiently finds important unshared information. If you’re not asking questions in a way that could help you notice that the other person knows things you don’t, then you almost certainly are not doing an efficient search.
The investigation doesn’t have to be perfectly symmetrical – for instance, if I’m talking with a biologist about biological research, it’s unlikely that I am in possession of relevant facts about biology that they lack. But even so, if we disagree, they should often have some uncertainty about exactly what the root of the disagreement is. (E.g. I might be skeptical of some animals’ moral patienthood not because I think that as a matter of biological fact they don’t have nociception, but because I have the philosophical position that nociception isn’t enough.)
My guess is that if people actually manage to remember this bit of model uncertainty and act from it, they’ll automatically generate the correct social moves.
I think #2 has developed as a social norm because of the personal incentives involved. If you avoid saying things the other person won’t like, then you’re protecting the relationship and therefore achieving the most benefit for yourself. If you instead say what you’re really thinking, there might be some benefit for the other person if they maturely process it. But, more likely, they will resent you a little for it, leaving little incentive for you to speak your mind. This might be why it’s often close friends, family, or mentors that give uncomfortable advice, since these relationships are less likely to be compromised by the discomfort.
I like that the EA community has partly overcome this type of social norm, so truth can be discovered with less risk to relationships. Personally, I’ve never been much good at resisting the temptation to speak my mind! But it’s worth remembering that the EA norm of constructive criticism and speaking one’s mind doesn’t often extend to the rest of the world we are trying to influence. I’ve certainly dealt with people who perceive any criticism, constructive or not, as an attack.