Doing things in a populated world

The world has lots of people and things in it. And they are organized in such a mishmash that changing a thing will often make large numbers of people better off or worse off. And for a big thing—even a very good big thing—the number who are worse off is very unlikely to be zero.

This means that if you want to do big things, you will either have to make some people worse off, or rearrange the gains to make everyone better off.

If there are only a small number of people involved, you might be able to make everyone better off with a careful choice of things to change. But if the group is large, you will probably need some sort of generic value fluid that can flow between the parties and fill in the holes, so that everyone ends up a bit better off instead of some people much better off and some worse off. Money and social respect both fill this role, assuming there aren’t other impediments to using them, but a giant barrel of compensatory apricots might also work.

This suggests that whether big changes are made depends on the availability of workable value fluid, along with the propensity of the powerful to make the less powerful worse off without compensation. The availability of workable value fluid might for instance change according to social or technical technology for maintaining it, as well as impediments to using that technology.

For instance, if a large group of people were already headed to restaurant A, but the group would on net prefer restaurant B, they might not make this switch, because someone who prefers B would have to raise the issue, and it would feel a bit too much like conflict (and annoyance of extra negotiation for everyone). However if a couple of the people who prefer B actually own B and can offer drinks on the house to the group—and that is enough for everyone to prefer B, including the B owners—the switch can happen more easily. (I’m really thinking of things like shifts in legislation or giant infrastructure projects, but much more of my own experience is with groups going to restaurants.)
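The drinks-on-the-house move is a side payment, and whether one exists is just arithmetic: if the switch is a net gain, winners can compensate losers out of their surplus; if it is a net loss, no schedule of transfers helps. A minimal sketch, with all names and numbers invented for illustration:

```python
def compensating_transfers(gains):
    """gains maps each person to their change in value if the switch happens.
    Returns what each person pays (negative = receives) so that everyone
    ends up weakly better off, or None if the switch is a net loss, in
    which case no schedule of transfers can rescue it."""
    winners = {p: g for p, g in gains.items() if g > 0}
    losers = {p: g for p, g in gains.items() if g < 0}
    needed = -sum(losers.values())        # total compensation owed to losers
    surplus = sum(winners.values())
    if surplus < needed:                  # a net loss overall
        return None
    scale = needed / surplus if surplus else 0.0
    pays = {p: g * scale for p, g in winners.items()}  # winners chip in pro rata
    pays.update(losers)                   # losers "pay" a negative amount, i.e. receive
    return pays

# Restaurant B is better on net, but dee and eli prefer A:
gains = {"ana": 3, "bo": 2, "cal": 2, "dee": -1, "eli": -2}
pays = compensating_transfers(gains)
# After transfers, each person's outcome is gains[p] - pays.get(p, 0) >= 0:
# winners keep part of their gain, losers are exactly compensated.
```

The same check fails, as it should, when the group prefers A on net: then no amount of value fluid makes the switch to B better for everyone.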

Is this right? Is it a big factor? (Theoretically salient mechanisms can be pretty minor in the real world.)

Halloween

Leonard Cohen on his poetic voice:

“As I grew older, I understood that instructions came with this voice. What were these instructions? The instructions were never to lament casually. And if one is to express the great inevitable defeat that awaits us all, it must be done within the strict confines of dignity and beauty.”

This is apparently exactly the opposite of how most people in the American world feel at this time of year. The great inevitable defeat should be expressed with electronically jeering plastic skeletons, humorous fake corpses, faux-threatening gravestones, and—let’s not forget the violence and disease responsible for so many deaths—generous fake blood, vomit, and slime. The celebration should be not only frivolous and ugly to the max, but really go hard on minimizing dignity! Don’t just dress as a fun ugly corpse, make it a sexually depraved fun ugly corpse!

Isn’t this just a bit weird? For instance, how is this good? 

I’ve heard a couple of times something like: “Halloween is nice because it is rebellious, and a relief from all that seriousness. People usually treat these things with sacredness and somberness, and it’s a lot to deal with.”

If that’s what it is, would it also be cool if we incorporated more second-rate stand up comedy routines and pranks into funerals? Or had more fun smallpox themed parties?

I don’t think people do actually contend with death or horror in a more comfortable way via Halloween, for the most part. My guess is that they basically don’t think about the content of what they are doing, and just get used to hunting for bargain plastic corpse babies to put on their lawns, and laughing with each other about how realistic and gruesome they are. Which seems more like desensitization than coming to terms with a thing. Which I doubt is a good kind of relief from seriousness. 

Also, if we are going to have a change of mood break around the potential horrors of life, the alternate feelings and purposes Halloween suggests just don’t seem very good. ‘Trick or treat?’ Cheap malice or cheap hedonism? Quick, find a huge mound of junk food, I forgot because I’m very competitive about drunkenly parading my buttocks in a way that makes me seem clever.

I’m obviously missing something, and I don’t actually blame anyone much within reason for getting into the holidays of their culture, but come on culture, what is this?

Strong stances

I. The question of confidence

Should one hold strong opinions? Some say yes. Some say that while it’s hard to tell, it tentatively seems pretty bad (probably). There are many pragmatically great upsides, and a couple of arguably unconscionable downsides. But rather than judging the overall sign, I think a better question is, can we have the pros without the devastatingly terrible cons?

A quick review of purported or plausible pros:

  1. Strong opinions lend themselves to revision:
    1. Nothing will surprise you into updating your opinion if you thought that anything could happen. A perfect Bayesian might be able to deal with myriad subtle updates to vast uncertainties, but a human is more likely to notice a red cupcake if they have claimed that cupcakes are never red. (Arguably—some would say having opinions makes you less able to notice any threat to them. My guess is that this depends on topic and personality.)
    2. ‘Not having a strong opinion’ is often vaguer than having a flat probability distribution, in practice. That is, the uncertain person’s position is not, ‘there is a 51% chance that policy X is better than policy -X’, it is more like ‘I have no idea’. Which again doesn’t lend itself to attending to detailed evidence.
    3. Uncertainty breeds inaction, and it is harder to run into more evidence if you are waiting on the fence, than if you are out there making practical bets on one side or the other.
  2. (In a bitterly unfair twist of fate) being overconfident appears to help with things like running startups, or maybe all kinds of things.
    If you run a startup, common wisdom advises going around saying things like, ‘Here is the dream! We are going to make it happen! It is going to change the world!’ instead of things like, ‘Here is a plausible dream! We are going to try to make it happen! In the unlikely case that we succeed at something recognizably similar to what we first had in mind, it isn’t inconceivable that it will change the world!’ Probably some of the value here is just a zero sum contest to misinform people into misinvesting in your dream instead of something more promising. But some is probably real value. Suppose Bob works full time at your startup either way. I expect he finds it easier to dedicate himself to the work and has a better time if you are more confident. It’s nice to follow leaders who stand for something, which tends to go with having at least some strong opinions. Even alone, it seems easier to work hard on a thing if you think it is likely to succeed. If being unrealistically optimistic just generates extra effort to be put toward your project’s success, rather than stealing time from something more promising, that is a big deal.
  3. Social competition
    Even if the benefits of overconfidence in running companies and such were all zero sum, everyone else is doing it, so what are you going to do? Fail? Only employ people willing to work at less promising looking companies? Similarly, if you go around being suitably cautious in your views, while other people are unreasonably confident, then onlookers who trust both of you will be more interested in what the other people are saying.
  4. Wholeheartedness
    It is nice to be the kind of person who knows where they stand and what they are doing, instead of always living in an intractable set of place-plan combinations. It arguably lends itself to energy and vigor. If you are unsure whether you should be going North or South, having reluctantly evaluated North as a bit better in expected value, for some reason you often still won’t power North at full speed. It’s hard to passionately be really confused and uncertain. (I don’t know if this is related, but it seems interesting to me that the human mind feels as though it lives in ‘the world’—this one concrete thing—though its epistemic position is in some sense most naturally seen as a probability distribution over many possibilities.)
  5. Creativity
    Perhaps this is the same point, but I expect my imagination for new options kicks in better when I think I’m in a particular situation than when I think I might be in any of five different situations (or worse, in any situation at all, with different ‘weightings’).
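The updating point in 1(a) can be made quantitative: in Bayesian terms, a sharp prior assigns low probability to the surprising observation, so the observation carries many bits of surprisal and forces a large revision, while a flat prior barely registers it as news. A toy sketch — the hypothesis and all the numbers are invented for illustration:

```python
import math

def see_red_cupcake(prior_h, p_red_if_h=0.3, p_red_if_not_h=0.001):
    """Bayes' rule for H = 'red cupcakes are a real thing', on seeing one.
    Returns the posterior on H and the surprisal (in bits) of the sighting."""
    p_red = prior_h * p_red_if_h + (1 - prior_h) * p_red_if_not_h
    posterior = prior_h * p_red_if_h / p_red
    return posterior, -math.log2(p_red)

# 'Cupcakes are never red': a strong opinion, i.e. a sharp prior against H.
post_strong, bits_strong = see_red_cupcake(prior_h=0.01)
# 'No idea': a flat prior.
post_flat, bits_flat = see_red_cupcake(prior_h=0.5)
# The opinionated observer is hit with ~8 bits of surprise and jumps from
# 1% to ~75%; the flat observer gets ~2.7 bits and learns little that
# feels like news.
```

This is only the mechanical half of the claim, of course; whether a human with the strong opinion actually notices the cupcake is the part that depends on topic and personality.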

A quick review of the con:

  1. Pervasive dishonesty and/or disengagement from reality
    If the evidence hasn’t led you to a strong opinion, and you want to profess one anyway, you are going to have to somehow disengage your personal or social epistemic processes from reality. What are you going to do? Lie? Believe false things? These both seem so bad to me that I can’t consider them seriously. There is also this sub-con:

    1. Appearance of pervasive dishonesty and/or disengagement from reality
      Some people can tell that you are either lying or believing false things, due to your boldly claiming things in this uncertain world. They will then suspect your epistemic and moral fiber, and distrust everything you say.
  2. (There are probably others, but this seems like plenty for now.)

II. Tentative answers

Can we have some of these pros without giving up on honesty or being in touch with reality? Some ideas that come to mind or have been suggested to me by friends:

1. Maintain two types of ‘beliefs’. One set of play beliefs—confident, well understood, probably-wrong—for improving in the sandpits of tinkering and chatting, and one set of real beliefs—uncertain, deferential—for when it matters whether you are right. For instance, you might have some ‘beliefs’ about how cancer can be cured by vitamins that you chat about and ponder, and read journal articles to update, but when you actually get cancer, you follow the expert advice to lean heavily on chemotherapy. I think people naturally do this a bit, using words like ‘best guess’ and ‘working hypothesis’.

I don’t like this plan much, though admittedly I basically haven’t tried it. For your new fake beliefs, either you have to constantly disclaim them as fake, or you are again lying and potentially misleading people. Maybe that is manageable through always saying ‘it seems to me that…’ or ‘my naive impression is…’, but it sounds like a mess.

And if you only use these beliefs on unimportant things, then you miss out on a lot of the updating you were hoping for from letting your strong beliefs run into reality. You get some though, and maybe you just can’t do better than that, unless you want to be testing your whacky theories about cancer cures when you have cancer.

It also seems like you won’t get a lot of the social benefits of seeming confident, if you still don’t actually believe strongly in the really confident things, and have to constantly disclaim them.

But I think I actually object because beliefs are for true things, damnit. If your evidence suggests something isn’t true, then you shouldn’t be ‘believing’ it. And also, if you know your evidence suggests a thing isn’t true, how are you even going to go about ‘believing it’? I don’t know how to.

2. Maintain separate ‘beliefs’ and ‘impressions’. This is like 1, except impressions are just claims about how things seem to you. e.g. ‘It seems to me that vitamin C cures cancer, but I believe that that isn’t true somehow, since a lot of more informed people disagree with my impression.’ This seems like a great distinction in general, but it seems a bit different from what one wants here. I think of this as a distinction between the evidence that you received, and the total evidence available to humanity, or perhaps between what is arrived at by your own reasoning about everyone’s evidence vs. your own reasoning about what to make of everyone else’s reasoning about everyone’s evidence. However these are about ways of getting a belief, and I think what you want here is actually just some beliefs that can be got in any way. Also, why would you act confidently on your impressions, if you thought they didn’t account for others’ evidence, say? Why would you act on them at all?

3. Confidently assert precise but highly uncertain probability distributions: “We should work so hard on this, because it has like a 0.03% chance of reshaping 0.5% of the world, making it a 99.97th percentile intervention in the distribution we are drawing from, so we shouldn’t expect to see something this good again for fifty-seven months.” This may solve a lot of problems, and I like it, but it is tricky.
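For what it’s worth, the numbers in that mock assertion can be made to hang together. Being at the 99.97th percentile means roughly one draw in 3,333 is this good, so the ‘fifty-seven months’ figure implicitly assumes a rate at which candidate interventions are considered — about 58 a month in this reconstruction. That rate is my invention; the rest follows from the quote:

```python
p_success = 0.0003            # "a 0.03% chance"
fraction_reshaped = 0.005     # "...of reshaping 0.5% of the world"
expected_impact = p_success * fraction_reshaped   # 1.5e-6 "worlds" in expectation

percentile = 0.9997                      # "99.97th percentile intervention"
draws_per_hit = 1 / (1 - percentile)     # ~3,333 candidates per one this good
draws_per_month = 58.5                   # assumed evaluation rate (invented here)
months_to_next = draws_per_hit / draws_per_month   # ~57 months
```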

4. Just do the research so you can have strong views. To do this across the board seems prohibitively expensive, given how much research it seems to take to be almost as uncertain as you were on many topics of interest.

5. Focus on acting well rather than your effects on the world. Instead of trying to act decisively on a 1% chance of this intervention actually bringing about the desired result, try to act decisively on a 95% chance that this is the correct intervention (given your reasoning suggesting that it has a 1% chance of working out). I’m told this is related to Stoicism.

6. ‘Opinions’
I notice that people often have ‘opinions’, which they are not very careful to make true, and do not seem to straightforwardly expect to be true. This seems to be commonly understood by rationally inclined people as some sort of failure, but I could imagine it being another solution, perhaps along the lines of 1.

(I think there are others around, but I forget them.)

III. Stances

I propose an alternative solution. Suppose you might want to say something like, ‘groups of more than five people at parties are bad’, but you can’t because you don’t really know, and you have only seen a small number of parties in a very limited social milieu, and a lot of things are going on, and you are a congenitally uncertain person. Then instead say, ‘I deem groups of more than five people at parties bad’. What exactly do I mean by this? Instead of making a claim about the value of large groups at parties, make a policy choice about what to treat as the value of large groups at parties. You are adding a new variable ‘deemed large group goodness’ between your highly uncertain beliefs and your actions. I’ll call this a ‘stance’. (I expect it isn’t quite clear what I mean by a ‘stance’ yet, but I’ll elaborate soon.) My proposal: to be ‘confident’ in the way that one might be from having strong beliefs, focus on having strong stances rather than strong beliefs.

Strong stances have many of the benefits of confident beliefs. With your new stance on large groups, when you are choosing whether to arrange chairs and snacks to discourage large groups, you skip over your uncertain beliefs and go straight to your stance. And since you decided it, it is certain, and you can rearrange chairs with the vigor and single-mindedness of a person who knows where they stand. You can confidently declare your opposition to large groups, and unite followers in a broader crusade against giant circles. And if at the ensuing party people form a large group anyway and seem to be really enjoying it, you will hopefully notice this the way you wouldn’t if you were merely uncertain-leaning-against regarding the value of large groups.

That might have been confusing, since I don’t know of good words to describe the type of mental attitude I’m proposing. Here are some things I don’t mean by ‘I deem large group conversations to be bad’:

  1. “Large group conversations are bad” (i.e. this is not about what is true, though it is related to that.)
  2. “I declare the truth to be ‘large group conversations are bad’” (i.e. this is not of a kind with beliefs. It is not directly about what is true about the world, or empirically observed, though it is influenced by these things. I do not have power over the truth.)
  3. “I don’t like large group conversations”, or “I notice that I act in opposition to large group conversations” (i.e. it is not a claim about my own feelings or inclinations, which would still be a passive observation about the world)
  4. “The decision-theoretically optimal value to assign to large groups forming at parties is negative”, or “I estimate that the decision-theoretically optimal policy on large groups is opposition” (i.e. it is a choice, not an attempt to estimate a hidden feature of the world.)
  5. “I commit to stopping large group conversations” (i.e. It is not a commitment, or directly claiming anything about my future actions.)
  6. “I observe that I consistently seek to avert large group conversations” (this would be an observation about a consistency in my behavior, whereas here the point is to make a new thing (assign a value to a new variable?) that my future behavior may consistently make use of, if I want.)
  7. “I intend to stop some large group conversations” (perhaps this one is closest so far, but a stance isn’t saying anything about the future or about actions—if it doesn’t get changed by the future, and then in future I want to take an action, I’ll probably call on it, but it isn’t ‘about’ that.)

Perhaps what I mean is most like: ‘I have a policy of evaluating large group discussions at parties as bad’, though using ‘policy’ as a choice about an abstract variable that might apply to action, but not in the sense of a commitment.

What is going on here more generally? You are adding a new kind of abstract variable between beliefs and actions. A stance can be a bit like a policy choice on what you will treat as true, or on how you will evaluate something. Or it can also be its own abstract thing that doesn’t directly mean anything understandable in terms of the beliefs or actions nearby.

Some ideas we already use that are pretty close to stances are ‘X is my priority’, ‘I am in the dating market’, and arguably, ‘I am opposed to dachshunds’. X being your priority is heavily influenced by your understanding of the consequences of X and its alternatives, but it is your choice, and it is not dishonest to prioritize a thing that is not important. To prioritize X isn’t a claim about the facts relevant to whether one would want to prioritize it. Prioritizing X also isn’t a commitment regarding your actions, though the purpose of having a ‘priority’ is for it to affect your actions. Your ‘priority’ is a kind of abstract variable added to your mental landscape to collect up a bunch of reasoning about the merits of different things, and package them for easy use in decisions.

Another way of looking at this is as a way of formalizing and concretifying the step where you look at your uncertain beliefs and then decide on a tentative answer and then run with it.

One can be confident in stances, because a stance is a choice, not a guess at a fact about the world. (Though my stance may contain uncertainty if I want, e.g. I could take a stance that large groups have a 75% chance of being bad on average.) So while my beliefs on a topic may be quite uncertain, my stance can be strong, in a sense that does some of the work we wanted from strong beliefs. Nonetheless, since stances are connected with facts and values, my stance can be wrong in the sense of not being the stance I should want to have, on further consideration.

In sum, stances:

  1. Are inputs to decisions in the place of some beliefs and values
  2. Integrate those beliefs and values—to the extent that you want them to be—into a single reusable statement
  3. Can be thought of as something like ‘policies’ on what will be treated as the truth (e.g. ‘I deem large groups bad’) or as new abstract variables between the truth and action (e.g. ‘I am prioritizing sleep’)
  4. Are chosen by you, not implied by your epistemic situation (until some spoilsport comes up with a theory of optimal behavior)
  5. Therefore don’t permit uncertainty in one sense, and don’t require it in another (you know what your stance is, and your stance can be ‘X is bad’ rather than ‘X is 72% likely to be bad’), though you should be uncertain about how much you will like your stance on further reflection.

I have found having stances somewhat useful, or at least entertaining, in the short time I have been trying them, but this is more of a speculative suggestion than trustworthy advice, with no other evidence behind it.

Worth keeping

(Epistemic status: quick speculation which matches my intuitions about how social things go, but which I hadn’t explicitly described before, and haven’t checked.)

If your car gets damaged, should you invest more or less in it going forward? It could go either way. The car needs more investment to be in good condition, so maybe you do that. But the car is worse than you thought, so maybe you start considering a new car, or putting your dollars into Uber instead.

If you are writing an essay and run into difficulty describing something, you can put in additional effort to find the right words, or you can suspect that this is not going to be a great essay, and either give up, or prepare to get it out quickly and imperfectly, worrying less about the other parts that don’t quite work.

When something has a problem, you always choose whether to double down with it or to back away.

(Or in the middle, to do a bit of both: to fix the car this time, but start to look around for other cars.)

I’m interested in this as it pertains to people. When a friend fails, do you move toward them—to hold them, talk to them, pick them up at your own expense—or do you edge away? It probably depends on the friend (and the problem). If someone embarrasses themselves in public, do you sully your own reputation to stand up for their worth? Or do you silently hope not to be associated with them? If they are dying, do you hold their hand, even if it destroys you? Or do you hope that someone else is doing that, and become someone they know less well?

Where a person fits on this line would seem to radically change their incentives around you. Someone firmly in your ‘worth keeping’ zone does better to let you see their problems than to hide them. Because you probably won’t give up on them, and you might help. Since everyone has problems, and they take effort to hide, this person is just a lot freer around you. If instead every problem hastens a person’s replacement, they should probably not only hide their problems, but also many of their other details, which are somehow entwined with problems.

(A related question is when you should let people know where they stand with you. Prima facie, it seems good to make sure people know when they are safe. But that also means it is clearer when a person is not safe, which has downsides.)

If there are better replacements in general, then you will be inclined to replace things more readily. If you can press a button to have a great new car appear, then you won’t have the same car for long.

The social analog is that in a community where friends are more replaceable—for instance, because everyone is extremely well selected to be similar on important axes—it should be harder to be close to anyone, or to feel safe and accepted. Even while everyone is unusually much on the same team, and unusually well suited to one another.

Are ethical asymmetries from property rights?

These are some intuitions people often have:

  • You are not required to save a random person, but you are definitely not allowed to kill one
  • You are not required to create a person, but you are definitely not allowed to kill one
  • You are not required to create a happy person, but you are definitely not allowed to create a miserable one
  • You are not required to help a random person who will be in a dire situation otherwise, but you are definitely not allowed to put someone in a dire situation
  • You are not required to save a person in front of a runaway train, but you are definitely not allowed to push someone in front of a train. By extension, you are not required to save five people in front of a runaway train, and if you have to push someone in front of the train to do it, then you are not allowed to do so.

Here are some more:

  • You are not strongly required to give me your bread, but you are not allowed to take mine
  • You are not strongly required to lend me your car, but you are not allowed to unilaterally borrow mine
  • You are not strongly required to send me money, but you are not allowed to take mine

The former are ethical intuitions. The latter are implications of a basic system of property rights. Yet they seem very similar. The ethical intuitions seem to just be property rights as applied to lives and welfare. Your life is your property. I’m not allowed to take it, but I’m not obliged to give it to you if you don’t by default have it. Your welfare is your property. I’m not allowed to lessen what you have, but I don’t have to give you more of it.

[Edited to add: A basic system of property rights means assigning each thing to a person, who is then allowed to decide what happens to that thing. This gives rise to asymmetry because taking another person’s things is not allowed (since they are in charge of them, not you), but giving them more things is neutral (since you are in charge of your things and can do what you like with them).]

My guess is that these ethical asymmetries—which are confusing, because they defy consequentialism—are part of the mental equipment we have for upholding property rights.

In particular these well-known asymmetries seem to be explained well by property rights:

  • The act-omission distinction naturally arises where an act would involve taking someone else’s property (broadly construed—e.g. their life, their welfare), while an omission would merely fail to give them additional property (e.g. life that they are not by default going to have, additional welfare).
  • ‘The asymmetry’ between creating happy and miserable people is because to create a miserable person is to give that person something negative, which is to take away what they have, while creating a happy person is giving that person something extra.
  • Person-affecting views arise because birth gives someone a thing they don’t have, whereas death takes a thing from them.

Further evidence that these intuitive asymmetries are based on upholding property rights: we also have moral-feeling intuitions about more straightforward property rights. Stealing is wrong.

If I am right that we have these asymmetrical ethical intuitions as part of a scheme to uphold property rights, what would that imply?

It might imply something about when we want to uphold them, or consider them part of ethics, beyond their instrumental value. Property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources—which is to say, to somehow use the resources with low levels of conflict and destruction. They do not appear to be a system for people to achieve specific goals, e.g. whatever is actually good. Unless what is good is exactly the smooth sharing of resources.

I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons and just reason about the consequentialist value of upholding property rights? If we have the moral intuition, does that make the thing of moral value, regardless of its origins? Are pragmatic rules for social cohesion all that ethics is anyway? Questions for another time perhaps (when we are sorting out meta-ethics anyway).

A more straightforward implication is for how we try to explain these ethical asymmetries. If we have an intuition about an asymmetry which stems from upholding property rights, it would seem to be a mistake to treat it as evidence about an asymmetry in consequences, e.g. in value accruing to a person. For instance, perhaps I feel that I am not obliged to create a life, by having a child. Then—if I suppose that my intuitions are about producing goodness—I might think that creating a life is of neutral value, or is of no value to the created child. When in fact the intuition exists because allocating things to owners is a useful way to avoid social conflict. That intuition is part of a structure that is known to be agnostic about benefits to people from me giving them my stuff. If I’m right that these intuitions come from upholding property rights, this seems like an error that is actually happening.