
How you will change the world

Hopefully this is obvious to many people, but it seems some smart ones at least don’t really think about it.

Suppose you have some grand goal that many people fail at. For instance, you want to revolutionise your field, or start the social movement that stops poverty, or build a flight search application that isn’t frustrating.

Before you can think you have any real hope of achieving it, you will need:

  1. Some idea of what it is that everyone else gets wrong
  2. Some strategy for avoiding that

Ok, so far so good, you may think: nobody else tries hard enough, and you will try hard enough.

Not so fast! You will also need:

  1. For the failure and the strategy to correspond with how the world actually works, rather than being things you ‘believe in’ or would like to identify with, or just interesting or novel ideas which are fun to chat about.
  2. A meta-level idea of why nobody else has come up with your strategy for solving it. ‘Be more passionate than anyone else’ seems to be a popular intended solution, for instance, but it causes difficulties at this point because chances are every other idealistic youth has thought of it before. If they still failed, then you don’t yet have any reason to suppose you will do better.

Of course you don’t need all this stuff to try blindly; you just have to accept that your chances of success are very low. I think you will also often do better by directly trying to answer these questions before you start.

In defence of ignorant thinking

Suppose you want to contribute to the understanding of some subject, but you are presently ignorant about it. Should you do something closer to (a) read everything that’s been written so far, then join in, or (b) think about it yourself a lot before you even look at the basics of what others have come up with?

My guess is closer to (b), though I’m not confident. I’ll tell you why, then you can tell me why I’m wrong if you care to.

Any given topic has many ways to frame it: different assumptions to make, axioms to emphasise, evidence to notice, questions to ask of it, and aspects to cut out or leave in or smooth out in the abstraction process. Some varieties of each of these things are much more useful than others for making progress, and even the useful ones may help with progress in different directions. When different people approach the same topic, they will do it with a different set of all of these things, because they have different intuitions about it and are familiar with different approaches and other topics. I don’t know of a better, more formal way to try out such things.

Once you have understood something complex in one set of terms and abstractions, it becomes harder to see it in other ways, I think, particularly if you have to make up those other ways yourself. So if you start by reading what everyone else has said, you miss out on an opportunity to make a new way to think about it.

Most ways to think about a problem are probably unsuccessful in creating anything new of value. So you might think it’s a tragedy of the commons – it’s better for progress on a subject if each person joining it spends a bit of time at the start trying their own approach before they are familiar with the old work, but it is better for each individual if they just get on with the old work since their own approach probably won’t be any good. But if you do come up with a successful approach, I assume you are duly recompensed with status and glee and that sort of thing.

If eventually we have a perfect general understanding of how best to conceptualise topics, and how to ask the most productive questions and make the best assumptions and so on, then (a) will be the right answer. Until then, I’m in favour of a bit of ignorant thinking. What do you think? (Assuming your answer is (b), or you are an expert on this topic.)

Why we love unimportant things

Consider all the things humans have ever invented. On average, the ones that have been adopted by the most people should be the most useful ones. This seems to be roughly what has happened.

Now consider the ones we get really excited about, and identify with, and celebrate. These are the ones that are not widely adopted. Chairs have been adopted by everyone, because they are great. Nobody ever mentions this. You might think they are just taken for granted because they are old. But consider skis. Skis have been around forever. But they are more controversial than chairs: they have never caught on with some people. Now notice that people who do like skis actually rave about them, and think about them, and consider themselves skiing enthusiasts.

Here are some more unpopular and raved-about innovations: drying fruit in the sun, dancing, the iPhone, the gin and tonic, the internet, Christianity, watercolour painting, eating a larger meal at lunchtime than in the evening, sexual promiscuity, tea.

Here are some popular uncelebrated innovations: the escalator, the hat, the mobile phone (this was on the other list back when they were rare), the Phillips head screwdriver, the computer, queues, TV, bread, floors.

Here are the closest things I can think of to counterexamples: the internet (it really fits in the first category, but many people who love it must rarely come into contact with those who don’t, and vice versa; then again, people who rave about it often mean to support quite extreme and unorthodox uses of it); anti-racism (virtually everyone seems to think they like it, but the ones who rave about it do at least seem to think that others do not); anything a person consciously wants at that moment (e.g. they have been standing for ages and find a chair, or someone brings them a big cake), though they still don’t tend to speak up for that item in general or identify with it; sex.

So it seems that we largely celebrate the things that are least important to our actual wellbeing. It even looks to me like the less consensus there is on the value of something, the more impassioned are its fans. At the extreme, when people make up their very own theory or cheesecake or whatever they can often become quite obsessed.

I take all this as a sign that we basically celebrate stuff to draw attention to our identities, not because it’s important.

Hidden motives or innocent failure?

There are many ways in which what humans do differs from what they should do if they wanted to achieve the ends they claim to want to achieve. Some of these are obviously because people don’t really want what they say they want. Few people who claim human life is valuable beyond measure are unaware that small amounts of money can save lives overseas, for instance.

On the other hand, many cases are obviously innocent failures of imagination or knowledge. The apparent progress humanity has made over recent millennia is not just a winding path through various signaling equilibria; we have actually thought of better stuff to do. The stone age didn’t end because making everything out of stone stopped being a credible sign of a hardworking personality.

In between there are many interesting puzzles where it isn’t clear whether hidden motives or innocent failure are to blame*. Many people strongly prefer innocent failure as a default, but in general if you can think of some improvement to the status quo, it should be pretty surprising if heaps of other people haven’t also thought of it. Even if your idea is ultimately bad, there should be some signs of people having looked into it if its deficiency isn’t obvious. Often it is clear that people have known of apparently good ideas for ages, with no sign of action. So I think there is quite a case for hidden motives explaining many of these puzzles.

Sometimes when I point out such instances, I say something like ‘ha, you aren’t trying to do what you claim – looks like you are secretly trying to do this other thing instead’. Sometimes I say something like ‘if you are trying to do X, maybe you should try doing it in this way that would achieve X, rather than that other way that doesn’t seem to work so well’.

I’d like to make clear that my choice of explicitly blaming hidden motives vs. suggesting alternatives as though innocent failure were the cause is not necessarily based on how likely these two explanations are. I think either presentation of such a puzzle should suggest both hypotheses to some extent. If I blame hidden motives and you feel you don’t have those hidden motives, you should question whether you are behaving efficiently. If I blame innocent failure, and you don’t feel compelled to fix the failure, you might question your motives.

I expect the truth is usually a confusing mixture of hidden motives and innocent failure. In many such intrapersonal conflicts, it seems at least clear which side outsiders should be on. For instance if two parts of a person’s mind are interested in helping other people and looking like a nice person respectively, then inasmuch as those goals diverge outsiders should side more with the part who wants to help others, because at least others get something out of that.

Outsiders are also often in a good position to do this, due to their controlling influence on the part who wants to look like a nice person. They are the people to whom you must look nice. This means they can often side with the more altruistic part (or even if there isn’t one, for their own interests) just by insisting on higher standards of credible altruistic behaviour before they will be impressed. This is one good reason for pointing out what people should do better if they really cared, even if it seems unlikely that they do. Even if not a single reader really cares, one can at least hope to give them a measure by which to be more judgemental of others’ hypocrisy.

-:-:-

*The other very plausible explanation for a discrepancy between what seems sensible and what people do is always that people are in fact behaving sensibly, and the perplexed observer is just missing something. While this is presumably common, I will ignore it here.

When to explain

It is commonly claimed that humans’ explicit conscious faculties arose for explaining themselves and their intentions to others. Similarly, when people talk about designing robots that interact with people, they often mention the usefulness of designing such robots to be able to explain to you why they changed your investments or rearranged your kitchen.

Perhaps this is a generally useful principle for internally complex units dealing with each other: have some part that keeps an overview of what’s going on inside and can discuss it with others.

If so, the same seems like it should be true of companies. However my experience with companies is that they are often designed specifically to prevent you from being able to get any explanations out of them. Anyone who actually makes decisions regarding you seems to be guarded by layers of people who can’t be held accountable for anything. They can sweetly lament your frustrations, agree that the policies seem unreasonable, sincerely wish you a nice day, and most importantly, have nothing to do with the policies in question and so can’t be expected to justify them or change them based on any arguments or threats you might make.

I wondered why this strategy should differ for companies, and a friend pointed out that companies do often make an effort at higher-level explanations of what they are doing, though not necessarily accurate ones: vision statements, advertisements, and so on. PR is often the metaphor for how the conscious mind works, after all.

So it seems the company strategy is more complex: general explanations coupled with avoidance of being required to make more detailed ones of specific cases and policies. So, is this strategy generally useful? Is it how humans behave? Is it how successful robots will behave?*

Inspired by an interaction with ETS; evidenced lately by PNC and Verizon.

*assuming there is more than one