Prosocial manipulation

There is an axis of social calculativeness: whether your speech and social actions are carefully designed for particular outcomes, or are instinctive responses to the situation.

This is related to an axis of honesty: whether your words represent your actual state. Related, I suppose, because the words naively most likely to produce the best response are often not true. Though I’m not sure this is reliably so: feelings in the moment are often misleading, and honesty is often prudent.

Another axis is selfishness versus pro-socialness: whether your actions are meant to produce good outcomes for you (potentially at the expense of others) or a larger group such as the world.

The calculativeness axis seems widely expected to match the selfishness axis well. Manipulative people are bad. I don’t see why they should go together though, in theory. You can say what you feel like in conversation, or say things calculated to achieve goals. Shouldn’t people saying things to achieve goals do so for all kinds of goals, many venerable? In about the same distribution as people doing other things to achieve goals?

A natural question is whether calculated behavior really is reliably selfish, or whether people just feel like it is for some reason. I can think of cases where it isn’t selfish. For instance, a diplomat trying to arrange peace is probably choosing their words very carefully, and with regard to consequences. But it is hard to say how rare those are.

Perhaps we just don’t think of that as being calculative? Or I wonder if we do, and while we like it if peace is arranged, we would still be somewhat wary of a very good diplomat in our own dealings with them. Because even if they are acting for the good of the world, we suspect that it won’t be for our good, if we are the one being calculated about.

After all, we are presumably being led away from whatever our default choice would have been, had the person just represented their internal state as it came naturally. And moving away from that default sounds more likely bad than good, so manipulation more likely means exploiting us somehow than secretly helping us to an even better outcome. This is closely related to the honesty axis, and would mean ‘manipulative’ doesn’t really imply ‘globally consequentially bad’ so much as ‘dangerous to deal with’.

I am speculating. Are there common positive connotation terms for ‘socially manipulative’ or ‘calculating’? Is that a thing people do?

For signaling? (Part I)

 

Your T-shirt is embarrassing. Have you considered wearing a less embarrassing T-shirt?

You are suggesting I spend my precious time trying to look good. Well I am good, and so I’m not going to do that. Because signaling is bad. You can tell something is bad when the whole point of it is to have costs. Signaling is showing off. Signaling benefits me at someone else’s equal expense. I won’t wear a less embarrassing T-shirt because to Hell with signaling.

Hmm. That seems wrong. Signaling is about honest communication when the stakes are high—which is often important! And just because it’s called ‘costly’ doesn’t mean it is meant to have costs. It only has to be too costly for liars, and if it’s working then they won’t be doing any signaling anyway. ‘Costly signals’ can be very cheap for those who use them. I think signaling is often wonderful for society.

Give me three examples where it is ‘wonderful’.

Driver’s licences. Showing a driver’s licence is a costly signal of being a decent driver, which communicates something useful honestly, is cheap for the people who are actually good drivers, and lets the rest of society distinguish people who are likely to drive safely from people who are not, which is amazingly great.

Driving tests don’t seem that cheap to me, but I’ll grant that they are probably worth it. Still, this seems like a strange corner case of ‘signaling’ that was explicitly designed by humans. It fits the economic definition of ‘costly signaling’ but if you have to go that far from the central examples to find something socially beneficial, that doesn’t increase my regard for signaling. Next?

One of the most famous examples of signaling is in the job market. Potential candidates show a hirer their qualifications, which allows the hirer to employ more appropriate candidates. You might disagree about whether all of the signals that people use are socially optimal—for instance if education is mostly for signaling, it seems fairly destructive, because it is so expensive. But you must agree that companies do a lot better hiring the people they choose than they would hiring random people they would get if good candidates couldn’t signal their quality. And at least many aspects of the interview process are cheap enough to be totally worth it. For instance, being able to have a polite and friendly conversation about the subject matter.

Of course companies are better off—companies aren’t the people destroying years of their productive lives on deliberately arduous fake work. Or learning a lot of irrelevant but testable skills. Or degrading themselves and society with faux friendliness. And you ignore some other key details, like what the actual alternative would realistically look like. But let’s not go into it—I’ll grant you that hiring probably goes better overall than it would with zero signaling and no replacement, even though the signaling is awful. And more importantly, that the whole of society on net is probably best off with some kind of signaling there. I don’t know of a good replacement.

Ok, great. So, third—T-shirts. T-shirts signal personality traits. It is free to wear any T-shirt you want, but T-shirts are still costly signals in a sense, because if you aren’t a punk you won’t know which T-shirt to wear to look like a genuine punk. And if you don’t like ABBA it is more costly for you to wear an ABBA T-shirt than it is for someone who does like them, because you’ll be embarrassed or unhappy at the association. And if you have bad taste, it is hard to know which T-shirt would indicate good taste. This all seems good, because it lets people cheaply find other people with similar interests, and also to learn facts about the people around them, regardless of similarity. Which is why it is socially destructive for you to wear that T-shirt—your taste can’t be that bad, so you are basically lying.

Ok, a fourth: how about when a friend is sick, and you make them tea and soup and put on a movie for them. This is a costly signal that you care about them, or at least about your continuing friendship with them. Because it is effort for you with no reward if you don’t care much, and are looking to scale down the relationship soon. But aside from the signaling, this is probably a net social benefit—your friend gets soup and tea and a movie at a time when they could especially use them. Plus, feeling cared for instead of uncared for is a real benefit.

Ok, I concede that costly signaling can be honest, cheap, and on net socially beneficial. But I still think it usually isn’t! And I’m not sure how far we can get thinking about specific examples, since there are so many.

Ok, what do you propose?

Talking about our overall impressions. The big picture. Here is mine: the world is full of people pouring real wealth into things whose only use is to be rubbed in the face of those who can’t afford to destroy so much value. Where it isn’t even good for society to be able to distinguish the signalers from the rest. Letting everyone see who is rich and who is poor, who is socially competent and who is not, who is beautiful, who is smart, who can win at things that only exist to be won at—does this really lead to a great world?

There is much signaling that the world would be better off without. I admit I don’t really know what the balance of good and bad is like. But I disagree that we should be talking about signaling overall. Or even what is best for the world in this particular case. You are not the world. Even signaling that is terrible for the world is often good for you. If you are in a zero-sum game, and you are more worthy than the opponent, then do your best to win! And if you aren’t, then be more worthy!

What if I want what is best for society?

Even then, you don’t serve society by failing at signaling. Just because people fighting to look good is costly for society doesn’t mean that society gains anything by you intentionally losing that fight. If you are directing your resources to society, then it is better for society if you win. Often better enough to warrant the costs of playing. Serve society by winning at signaling and donating the proceeds to society. Wear a well-ironed suit. Don’t talk about your erotic porcelain dinosaur collection. Go to university. Try to exercise good taste…

I agree, at least often. But I think you believe in a heuristic that says you should signal about as much and in similar ways as if you were selfish. Because you are on the side of good, so protecting yourself is protecting the good. You see people looking weird and embarrassing themselves in the name of caring about something, and you think they are failing at signaling. And that’s wrong.

Yeah, I guess you should signal a tiny bit less on the margin, in cases where signaling is socially destructive. But it’s such a small thing, I’m not sure it is worth thinking about.

I don’t mean that. Your selfish interests can come apart from society’s interests almost entirely, in signaling. As an extreme case, imagine that you became confident that by far the best cause for improving the world was promoting incest. From a selfish perspective, you probably don’t want to look like you are promoting incest, because there are few worse ways to look in modern society. But from an altruistic perspective, supposing that you were right about incest, it may well be best for you to promote it, because it would do so much for making incest look better, at just the cost of your own reputation.

You should distinguish between wearing a clean shirt—good for your cause—and wearing a shirt that is more respectable because it is not about your cause—which is often bad for your cause. You can’t just use ‘looking good’ as a heuristic, even though it is generally good for your cause when its proponents look good.

That’s an interesting point, and I hadn’t really thought about it. But surely that’s pretty rare. There are systematic reasons that it’s unlikely that there is some cause which is radically more important than any other, and is completely politically unpalatable.

I agree that’s unlikely—I just brought it up as a clear example of looking good not being worth it. I think this issue is maybe ubiquitous though, in less clear and extreme cases. For instance, everywhere sophisticated people play it cool, withholding enthusiasm from ideas until they no longer lack enthusiasm, polishing their own image at the expense of the very projects they are most excited about, or would be if they deigned to experience excitement.

A bold claim—I am curious to hear two more examples, but I have a lot of signaling to get done this evening. Same time next week?

Most likely. I hope you are correctly identified as the superior type in all of your endeavors.

Impression track records

It is good to separate impressions from beliefs.

It is good to keep track records.

Is it good to keep separate impression and belief track records?

My default guess would be ‘a bit, but probably too much effort, since we hardly manage to keep any track records.’

But it seems maybe more than a bit good, for these reasons:

  1. Having good first impressions, and being good at turning everyone’s impressions into a good overall judgment might be fairly different skills, so that some people are good at one and some are good at the other, and you get a clearer signal if you separate them.
  2. We probably by default mostly learn about beliefs and not impressions, because by assumption, if I have both and they differ, I suspect the impression is wrong, and so advertising that I hold it will make me look worse.
  3. Impressions are probably better than beliefs to have track records for, because the point of the track records is to know how much weight to give different sources when constructing beliefs, and it is more straightforward to know directly which sources are good than to know which aggregations of sources are good (especially if they are mostly bad, because nobody has track records).

As in, perhaps we mostly keep belief track records when we keep track records, but would do better with impression track records. What would we do if we wanted to keep impression track records instead? (Do we already?)
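As one concrete possibility (my own sketch, not something established here—the record fields and example numbers are invented): log an impression and a belief for each question, then score the two columns separately, for instance with Brier scores, so that first-impression skill and aggregation skill get separate track records.

```python
# Sketch of keeping separate impression and belief track records.
# All field names and example data below are hypothetical.
from dataclasses import dataclass

@dataclass
class Record:
    question: str
    impression: float  # first-pass probability, before weighing others' views
    belief: float      # all-things-considered probability
    outcome: bool      # what actually happened

def brier(probs, outcomes):
    """Mean squared error of probabilities against outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

records = [
    Record("Will the talk run long?", impression=0.8, belief=0.6, outcome=True),
    Record("Will it rain tomorrow?", impression=0.3, belief=0.5, outcome=False),
]

# Scoring the two columns separately reveals which skill is stronger.
impression_score = brier([r.impression for r in records], [r.outcome for r in records])
belief_score = brier([r.belief for r in records], [r.outcome for r in records])
print(impression_score, belief_score)  # 0.065 0.205
```

In this toy data the impressions outperform the beliefs, which is exactly the kind of fact a combined track record would hide.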

What you can’t say to a sympathetic ear

Suppose we live in a society where it is strongly frowned upon to believe that an onion is a fruit. It is ok to disagree about what defines ‘fruit’, or what Allium varieties are onions. But none of this will get you off the hook—you had just better not suggest that an onion is a fruit.

You don’t think about the issue much yourself. If you had to, you would probably agree with the consensus view that the onion is not a fruit, given a few clarifications of the question. If you were allowed, you would probably admit that you don’t care much about the question. And that you would kind of prefer that it was possible to discuss the issue calmly and without accusations of transcendent evil.

However none of these things is relevant in the real world, because you daren’t even advocate for calmer consideration of the onion classification issue. People would infer that (in a sense) you don’t want to punish people who say that an onion is a fruit. And (in a different sense) not punishing it is much like endorsing it. Punishing non-punishers is an important part of cooperation.

Now suppose you and I are chatting over lunch in a work cafeteria, and I glance furtively around and then lean over to you with gleaming eyes, and whisper that I am making fruit soup tonight, and ahem, there are many people who would cry if they watched me cutting up the fruit for it.

You see what I’m saying. Nobody else seems to have heard. Are you annoyed? Do you think worse of me?

My guess is yes, at least quite plausibly.

And you are not annoyed because you find my comments troubling in their own right. You disagree with them, but don’t find them intrinsically offensive. Outside the context of our society, you wouldn’t mind.

You might be offended that I am willing to suggest an onion is a fruit in your presence in spite of knowing that most people would be unhappy about this. But suppose that we know each other well, and you know I know you are hard to hurt, even with grievous categorization errors.

I think you still have a strong reason to be annoyed. Which is that I am intentionally taking an action that the rest of the world thinks you are strongly obliged to punish—for instance, by threatening to stop associating with me unless I have an amazing excuse for what kind of seizure took over my mouth. Which means you must decide on the spot whether to punish me (at a cost to our relationship) or implicitly collude a bit with my renegade controversial-thing-saying faction. At a cost to your relationship with the world, because if they learned of this, they would hate you.

This makes my implied classification of onions as fruit into an ultimatum: ‘Me or the rest of society?’ If it is intentional, then it is a test of our friendship, at your expense. It’s like randomly saying ‘ok, if you really care about our friendship then steal $10 from your grandmother to prove it’.

Saying that onions are fruit quietly to you is holding our friendship hostage unless you shift your alliances away from the rest of the world and toward me. Or, more likely, it is an accident that still puts you in this position.

And it is very annoying to have your valuables taken hostage, and even more annoying to be threatened on short notice, with a deadline, so that you can’t just put it at the bottom of your to-do list and deal with it another time.

I hadn’t explicitly noticed that this kind of dynamic existed before (and it may not), but I think it might play a large part in my own feelings, on both sides of situations that are a bit like this.

I am sometimes annoyed when people reveal disagreeable views to me, even if I don’t especially disagree with them. Which is a bit surprising, on the face of it. And other times, I find myself in the position of wanting to say things that may sound controversial, and feeling hesitant, in part for the other person’s sake. So I got to thinking about the possible ways that could harm someone. And imagining myself in their shoes, this is the kind of harm I expected. I have not much idea if others feel the same way in these circumstances, or would construe the situation similarly in terms of game theory.

This might all seem pretty unimportant, being as it is a speculative and hand-wavey analysis of an already obscure social situation. But the existence of multiple reasons to be offended by officially offensive statements—even if you are sympathetic to them—means that social bans on views should be more stable than you might think. It’s one of those things where even if everyone comes to privately believe that onions are indeed fruit, and also thinks that nobody should be punished for saying this, and they can all talk to each other, everyone might still end up saying that onions aren’t fruit forever.

This means sanctions on speech aren’t just costly because they make it hard for individuals to hear ideas that might turn out to be true. They are more costly than that, because even if every individual manages to hear the ideas, and they are good, still they might not be able to update their behavior or the social consensus. And if everyone has to talk and behave as if a claim is false, we have lost a lot of the value of knowing that it is true.

To successfully condemn a view socially is to lock that view in place with a coordination problem. We could all freely identify onions as we wanted, but we have to all at once decide to change the norms, or else we get punished. And changing the norms would be hard to arrange at the best of times, but is harder when trying to arrange it warrants punishment.
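The lock-in above can be made concrete with a toy game (my own construction, with made-up payoff numbers, not a model from the text): give every agent a private benefit from voicing the banned view, a cost per agent who sanctions them, and a cost for failing to sanction when most others do. Then both the all-sanctioning and no-sanctioning states are self-sustaining, and moving between them requires coordinated change.

```python
# Toy model of a speech norm locked in by punishing non-punishers.
# Assumed setup: every agent privately believes the banned view.
B = 1.0   # private benefit of voicing the view
C = 2.0   # cost per sanctioning neighbor
M = 3.0   # cost of being punished as a non-punisher
N = 100   # number of agents

def best_response(others_sanctioning: int) -> tuple:
    """Return (speak?, sanction?) for one agent, given how many others sanction."""
    speak = B > C * others_sanctioning
    # Sanctioning is costless here; failing to sanction costs M whenever a
    # majority sanctions, because they also punish non-punishers.
    sanction = M * (others_sanctioning > N / 2) > 0
    return speak, sanction

# From the all-sanction state: no one speaks, and everyone keeps sanctioning.
print(best_response(others_sanctioning=N - 1))  # (False, True)

# From the no-sanction state: speaking is safe, and no one sanctions.
print(best_response(others_sanctioning=0))      # (True, False)
```

Both states are equilibria even though every agent privately disbelieves the norm, which is the coordination problem: no unilateral deviation pays, so the ban persists until everyone moves at once.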

If this analysis is correct, I think this situation should raise the bar for condemning views, because it makes it even harder for future people to undo our mistakes where we have erred. Condemnation is more permanent.

ETA Sept 3 2017: I am on reflection happy for people to tell me their controversial views if they are interesting—I bring up my slight feeling of annoyance about it as evidence that it is imposing some cost. But I am usually willing to pay the cost.

Critiquing other people’s plans politely

I wrote most of this post last year after spending several days at EA Global, a conference for Effective Altruists. I just went to this year’s, which reminds me I should post it. In good news, I am not writing about how to get around to things.

Many Effective Altruists are trying to do things that are really good with their lives, and EA Global can be an exciting time for hashing out which things those should be. Which brings to the fore some issues with doing this.

It is hard to have good plans for creating large amounts of value in the world, especially if you are an ignorant young person. Happily, lots of people know different things to one another, including many that are relevant to the probable success of everyone’s plans. Unhappily, exchanging this information can be somewhat fraught.

How things go wrong

1. The attack

One kind of bad thing that can happen is Ann says ‘I am starting a student group for minority square dancers because I think it will change the values of the future artificial arachnid civilization that arises in our place’ and everyone else thinks, ‘Oh boy, I know the correct answer to this one’ and they all jump in and helpfully explain that Ann’s plan is terrible in almost every way, and Ann is sad, and maybe thinks she shouldn’t hang out with these people anyway.

There is also a much milder version of this, where Ann says ‘I am starting a student group for open borders’ and Ben argues that open borders might not be that good, and that he thinks it probably isn’t the best thing to fight for. And Ann agrees in principle that it is good to discuss these things, but she admittedly still feels kind of sad about Ben’s willingness to criticize and oppose her, and it makes their friendship a bit worse.

2. The polite sidestep

I am not sure if #1 actually happens. Probably the milder version does, but I don’t remember seeing it much. A different kind of bad thing seems more common. Clara says ‘I’m going to work at Startup.io and earn money to give to my favorite EA cause—Raising Awareness About Goodness—and at the same time, I’ll be directly improving the lives of thousands of children somehow’. And Dora and Eloise and Freda would not have this plan themselves in a fit, because they can see many reasons it probably won’t work. They have heard bad things about Startup.io, and they think RAAG is going to receive more money than it knows what to do with from other sources and is bottlenecked on talent more, and they can think of numerous impediments to the bettering of thousands of children’s lives occurring amidst all this. However they are polite and friendly, so they don’t say any of that, and instead say ‘Oh, um, great, that’s very virtuous of you’ and ask if she is looking forward to having a shorter commute.

3. The inadvertent personal question

I saw a third kind of bad outcome at EA Global last time, which was sometimes my own fault. My conversation partner would have views on what it would be strategic for me to do. For instance, to give a talk at EA Global. (I was in a good position to decide to give a talk at the last minute, because there was one in the program with my name on it). So they would suggest this. But my reasons for not giving a talk were not just a person-neutral evaluation of giving talks. They included relatively personal considerations, such as my then high proclivity to anxiety attacks, and my lack of knowledge about how to gracefully flee an auditorium from an initial position of center stage.

Psychological health issues and embarrassing personal fleeing strategy deficiencies are more personal topics than my conversation partner was probably meaning to be asking me about, so this might have been awkward for them.

Furthermore, they aren’t topics that those people have particular expertise on, so it was also unfortunate if we missed an opportunity for them to give me any unique knowledge they have about why giving talks is generally so great.

This particular case is perhaps not a usual problem, because people are usually more given to polite evasiveness than me. And it wasn’t so bad. But it is a real example that is my own to discuss.

Here is a fictional example where it is a more central issue. Gretel joins the conversation about Clara’s move to Startup.io late. She says that while many people should be thrilled to be at Startup.io, she thinks Clara is especially talented and could do better. She suggests Clara consider doing good more directly, perhaps in biotechnology. She suggests some specific places where she knows people.

In fact, Clara agrees that there are probably higher value careers in the abstract, and that biotechnology would be a much better fit for some sort of stylized Clara. But she wants to be at Startup.io much more than she wants to try to start a career in biotechnology, for several reasons. First, she feels like she can’t do anything at all except pass tests, and longs for even just a few months where she gets to put in effort and have undeniable value come out. Second, and relatedly, she would like her family to know she has this power, even if she ultimately chooses not to use it. Third, she knows her husband is unhappy here, and so they might want to move, and the Startup job is fine to do for a short time, and allows for remote work. Plus, she sort of hates being around everyone in biotech—including some people in this room—for reasons she can’t put a finger on. And if she has to spend every day being told what to do by them, then she suspects she will also spend every day imagining killing them, which will be a distraction, and also kind of worrying.

Probably none of these things will make for mutually enjoyable or informative conversation, since Clara already knows more about these issues than the others, and while the others would not be bored hearing some of the human-interest details, their enjoyment would probably be at the expense of Clara’s, especially in the overall context of defending her apparently poor choices.

In general, people’s decisions about what to do with their lives factor in many details of their lives, which are often personal. If you ask why a person is doing something that doesn’t seem effective to you, the answer can easily be ‘I am not motivated to do anything unless it is located within five miles of this particular human’, or ‘I find moral motivations utterly demoralizing, so I really have to do something that I don’t consider overwhelmingly important, and then redirect earnings to something good after the fact’ or ‘Thinking about the most effective thing is stressful and hard, so I can only do it for an hour per month, and I’m up to the bit where I give some money to a reasonable charity’. Which are perhaps interesting topics for discussion if both parties want to, but since they sometimes don’t—either because it’s uncomfortable, or because there are more interesting things to discuss—it can be bad to end up there accidentally.

I think people often recognize that it is bad to end up in these places accidentally, which often takes them back to a different version of #2.

Proposed improvement

Of these problems, my guess is that #2 is by far the most directly destructive problem for the world, but that #2 is largely caused by fear of landing in #1 or #3, which are locally socially destructive. So it seems worth addressing fear of #1 and #3, possibly by addressing #1 and #3.

I have one speculative suggestion for this kind of situation, which I haven’t actually tested much, but I keep forgetting to, so I’m just going to suggest it: debate beliefs, not actions.

For example, if your conversation partner says they are going to grad school in philosophy, and you think this is a bad idea, instead of saying ‘that sounds ineffective’ or ‘oh, cool, philosophy is interesting’, you should ask about the factual questions that you think you and they might disagree on. For instance, ‘do you think there are high impact questions to answer in philosophy?’ or ‘do you think there will be jobs available in it, or is it more of a high-risk high-reward bet?’

This should help in avoiding #1 because debating a question that is indirectly related to the other person’s choices is less aggressive than debating their choices. And it should help in avoiding #3 because asking about one of the general facts informing their choice, rather than their choice, removes the need to bring up the other more personal facts informing it. My hope is that this makes it possible to talk about such things at all with fewer costs, and so to avoid #2.

On top of that, it means you get to talk more about factual questions of broad relevance, which are arguably more valuable to establish truth about. For instance, it is better to know whether giving talks tends to be helpful in general than it is to establish whether I personally will be able to give a good talk today.