
Affecting everything

People often argue that X is hugely important because it affects everything else. Sleep is so important because it affects your whole day. You should value your health more than anything because you need it for everything else. And your freedom too. And friends, and food. AI is the most important thing to work on because you could use it to get anything else. Same with anything that makes money, or gains power. Also sociology, because it’s about understanding people, and everything else we care about depends on people’s behaviour. And maths, science, and engineering are more important than anything because they illuminate the rest of the world, which is the most important thing too. Politics is most important because it determines the policies our country runs under, which affect everything. Law is similar. I assume garbage collectors know they are doing the most important thing because without garbage disposal society would collapse.

It turns out an awful lot of things affect everything, and a lot of them affect a lot of things a lot. That something has a broad influence is certainly a good starting criterion for it being important. It’s just a really low bar. It shouldn’t be the whole reason anyone does science or repairs roads, because it doesn’t distinguish those activities from a huge number of other ones. There is more than one thing that affects everything, because the set of things we might care about is not causally organized like a tree; it is organized like a very loopy web of loops.

[Image via Wikipedia: a segment of a social network. Even the dots on the right affect everything.]

Often this ‘affects everything’ criterion is not even used on any relevant margin. It is used in the sense that if you didn’t have sleep or any understanding of humans at all you would be in a much worse situation than if you had these things in abundance. A better question is whether sleeping another half hour or dedicating your own career to sociology is going to make a huge difference to everything. An even better question is whether it’s going to make an even bigger difference to everything than anything else you could do with that half hour or career. This is pretty well known, and applied in many circumstances, but for some reason it doesn’t stop people arguing from the interconnectedness of everything to the maximal importance of whatever they are doing.

Perhaps it is psychologically useful to have an all-purpose excuse that lets anyone doing anything that contributes at all to our hugely interconnected society feel like they are doing the most important thing ever. But if you really want to do something unusually useful, you’ll need a stronger criterion than ‘it affects everything’.

How to talk to yourself

[Image via Wikipedia: Scandinavian Airlines (SAS) airplane on Kiruna...]

Mental module 2: Eeek! Don’t make me go on that airplane! We will surely die! No no no!

Mental module 1: There is less than a one in a million chance we die if we get on that airplane, based on actual statistics from airplanes that are, as far as you are concerned, identical.

Mental module 2: No!! It’s a big metal box in the sky – that can’t work. Panic! Panic!

Mental module 1: If we didn’t have an incredible pile of data from other big metal boxes in the sky your argument would have non-negligible bearing on the situation.

Mental module 2: But what if it crashes??

Mental module 1: Our lives would be much nicer if you paid attention to probabilities as well as how you feel about outcomes.

Mental module 2: It will shudder and tip over and we will not know how to update our priors on that, and we will be terrified, briefly, before we die!

Mental module 1: If its shuddering and tipping over were actually good evidence that the plane was going to crash, there would presently be an incredibly small chance of them occurring, so you need not worry.

Mental module 2: We could crash into the rocks!!! Rocks! In our face! At terminal velocity! And bits of airplane! Do you remember that movie where an airplane crashed? There were bits of burning people everywhere. And what about those pictures you saw on the news? It’s going to be terrible. Even if we survive we will probably be badly injured and in the middle of a jungle, like that girl in that documentary. And what if we get deep vein thrombosis? We might struggle halfway out of the jungle on one leg only to get a pulmonary embolism and suddenly die with no hope of medical help, which probably wouldn’t help anyway.

Mental module 1: (realizing something) But Me 2, we identify with being rational, like clever people we respect. Thinking the plane is going to crash is not rational.

Mental module 2: Yeah, rationality! I am so rational. Rationality is the greatest thing, and we care about it infinitely much! Who cares if the plane is really going to crash – I sure won’t believe it will, because that’s not rational!

Mental module 1: (struggling to overcome normal urges) Yes, now you understand.

Mental module 2: And even when it’s falling from the sky I won’t be scared, because that would not be rational! And when we smash into the ground, we will die for rationality! Behold my rationality!

Mental module 1: (to herself and onlookers from non-fictional universes) It may seem reasonable to reason with yourself, but after years of attempting it – just because that’s what comes naturally – I think doing so relies on a false assumption: that other mental modules are like me somewhere deep down, and will eventually be moved by reasonable arguments, if only they get enough of them to overcome their inferior reasoning skills. Perhaps I have assumed this because I would like it to be true, or just because it is easiest to picture others as being like oneself.

In reality, the assumption is probably false. If part of your brain (or social network) doesn’t respond sensibly to information for the first week – or decade – of your acquaintance, you should be entertaining the possibility that they are completely insane. It is not obvious that well-reasoned arguments are the best strategy for dealing with an insane creature, or for that matter with almost any object. Well-reasoned arguments are probably not what you use with your ferret or your fire alarm.

Even if the mental module’s arguments are always only a bit flawed and can easily be corrected, resist the temptation to persist in correcting them if it isn’t working. An ongoing stream of slightly inaccurate arguments leading to the same conclusion is a sign that the arguments and the conclusion are causally connected in the wrong direction. In such cases, accuracy is futile.

Mental module 2 is a prime example, alas. She basically just expresses and reacts to emotions connected to whatever has her attention, and jumps to ‘implications’ through superficial associations. She doesn’t really do inference, and probability is a foreign concept. The effective ways to cooperate with her, then, are to distract her with something prompting more convenient emotions, or to direct her attention toward different emotional responses connected to the present issue. Identifying with being rational is a useful trick because it provides a convenient alternative emotional imperative – to follow the directions of the more reasonable part of oneself – in any situation where the irrational mental module can picture a rationalist.

Mental module 2: Oh yes! I’m so rational I tricked myself into being rational!

Estimation is the best we have

This argument seems common to many debates:

‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.

This could make sense if X wasn’t especially integral to the goal. For instance if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.

Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell whether you are increasing net utility, or by how much. The critic then concludes that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly should not warrant abandoning the goal. One should always be better off putting whatever effort one is willing to contribute into the utilitarian accuracy it can buy, rather than throwing it away on a strategy that is more random with respect to the goal.

A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being ridiculous. It’s just not possible to know which actions will increase the value of the company, or by exactly how much. Why don’t we try to make sure that all of our meetings end on time instead?’

In general, when optimizing X somehow is integral to the goal, the argument must fail. If the point is to make X as close to three as possible, for instance, then no matter how bad your best estimate of X under different conditions is, you can’t do better by ignoring X altogether. If you have a non-estimating-X strategy which you anticipate will do better than your best estimate at getting a good value of X, then you in fact believe yourself to have a better estimating-X strategy.
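
To make that concrete, here is a toy simulation of my own (the setup and numbers are illustrative assumptions, not anything from the argument above): the goal is to pick one of several actions so that X ends up near three, and even a very noisy estimate of X does better on average than ignoring X and choosing at random.

```python
# Toy illustration (assumed setup, not from the post): choosing an action so
# that X lands near 3, using either a noisy estimate of X or no estimate.
import random

def simulate(noise_sd, n_actions=5, trials=50_000):
    err_using_estimate = 0.0
    err_ignoring_x = 0.0
    for _ in range(trials):
        # Each candidate action would produce some true value of X.
        true_x = [random.uniform(0, 10) for _ in range(n_actions)]
        # We only get to see noisy estimates of those values.
        estimates = [x + random.gauss(0, noise_sd) for x in true_x]
        # Strategy 1: pick the action whose *estimate* is closest to 3.
        pick = min(range(n_actions), key=lambda i: abs(estimates[i] - 3))
        err_using_estimate += abs(true_x[pick] - 3)
        # Strategy 2: ignore X entirely and pick an action at random.
        err_ignoring_x += abs(true_x[random.randrange(n_actions)] - 3)
    return err_using_estimate / trials, err_ignoring_x / trials

for sd in (0.5, 2.0, 5.0):
    with_est, without = simulate(sd)
    print(f"noise sd {sd}: mean |X - 3| is {with_est:.2f} using estimates, "
          f"{without:.2f} ignoring X")
```

However noisy the estimates get, the estimating strategy only degrades toward the ignoring strategy; it does not fall below it, which is the point.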

I have criticized this kind of argument before in the specific realm of valuing human life, but it seems to apply more widely. Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. This is arguably similar to some lines of ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.

Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war, for instance, we should do something qualitative with scenarios and the like. This could be a defensible position. Perhaps intuitions can better implicitly assess probabilities via some other activity than explicitly thinking about them. However, I have not heard this claim accompanied by any such motivating evidence. Also, if this were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources, rather than disregarding quantitative assessments altogether.
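
As a sketch of that last suggestion (my own illustration; the word-to-probability mapping, the pooling rule, and the weights are all assumptions, not anything proposed above): one simple way to fold a qualitative scenario judgement into the numbers is to translate verbal likelihood labels into rough probabilities and pool them with whatever quantitative estimate is available, for instance by weighted averaging in log-odds space.

```python
# Hedged sketch of 'convert qualitative assessments and aggregate them'.
# The mapping, the weights, and the example numbers are illustrative only.
import math

QUALITATIVE_SCALE = {        # assumed mapping from words to probabilities
    "very unlikely": 0.05,
    "unlikely": 0.20,
    "about even": 0.50,
    "likely": 0.80,
    "very likely": 0.95,
}

def logit(p):
    return math.log(p / (1 - p))

def pool(probabilities, weights):
    """Weighted average in log-odds space, one simple pooling rule."""
    z = sum(w * logit(p) for p, w in zip(probabilities, weights)) / sum(weights)
    return 1 / (1 + math.exp(-z))

qualitative = QUALITATIVE_SCALE["unlikely"]   # an analyst's verbal judgement
quantitative = 0.08                           # e.g. a model-based estimate
combined = pool([qualitative, quantitative], weights=[1.0, 2.0])
print(f"combined probability: {combined:.3f}")
```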

Futarchy often prompts similar complaints that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, somehow some representation of what people want has to get into whatever system of government is used, for the result not to be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly in an unknown but messy fashion while they do other things is probably not more accurate than asking people what they want. It seems, however, that people think of the former as a successful way around the measurement problem, not a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?

Know thyself vs. know one another

People often aspire to the ideal of honesty, implicitly including both honesty to themselves and honesty with others. Those who care about it a lot often aim to be as honest as they can bring themselves to be, across circumstances. If the aim is to get correct information to yourself and other people, however, I think this approach isn’t the greatest.

There is probably a trade-off between being honest with yourself and being honest with others, so trying hard to be honest with others comes at the expense of being honest with yourself, which in turn also prevents correct information from getting to others.

Why would there be a trade-off? Imagine your friend said, ‘I promise that anything you tell me I will repeat to anyone who asks’. How honest would you be with that friend? If you say to yourself that you will report your thoughts to others, why wouldn’t the same effect apply?

Progress in forcing yourself to be honest with others must be something of an impediment to being honest with yourself. Being honest with yourself is presumably also a disincentive to being honest with others later, but that is less of a cost, since if you are dishonest with yourself you are presumably deceiving them about those topics either way.

For example, imagine you are wondering what you really think of your friend Errol’s art. If you are committed to truthfully admitting whatever the answer is to Errol or your other friends, it will be pretty tempting to sincerely interpret whatever experience you are having as ‘liking Errol’s art’. This way both you and the others end up deceived. If you were committed to lying in such circumstances, you would at least have the freedom to find out the truth yourself. This seems like the superior option for the truth-loving honesty enthusiast.

This argument relies on the assumptions that you can’t fully consciously control how deluded you are about the contents of your brain, and that the unconscious parts of your mind that control this respond to incentives. These things both seem true to me.

Poverty does not respond to incentives

I wrote a post a while back saying that preventing ‘exploitative’ trade is equivalent to preventing an armed threat by eliminating the ‘not getting shot in the head’ option. Some people countered this argument by saying that it doesn’t account for how others respond: if poor people take the option of being ‘exploited’, they won’t be offered alternatives as good in future as they would be if they held out.

This seems unlikely, but it reminds me of a real difference between these situations. If you forcibly prevent the person with the gun to their head from responding to the threat, the person holding the gun will generally want to get out of making the threat, as she now has nothing to gain and everything to lose. The world, on the other hand, will not relent from making people poor if you prevent the poor people from responding to it.

I wonder if the misintuition that the world will treat people better if they can’t give in to its ‘coercion’ is a result of familiarity with how single-agent threateners behave in this situation. As a side note, this makes preventing ‘exploitative’ trade worse relative to preventing threatened parties from acting on threats.