Doers or doings?

A girl recently invited me to a public lecture she was running with Helen Caldicott, the famous anti-nuclear advocate. Except the girl couldn’t remember the bit after ‘famous’. When I asked her, she narrowed it down to something big picture related to the environment. Helen’s achievements were obviously secondary, if not twenty-secondary, in motivating her to organize the event. Though the fact she was famous for whatever those things were was important.

I’ve done a few courses in science journalism. The main task there is to make science interesting and intelligible to people. The easiest way to do this is to cut down on the dry bit about how reality works, and fill it up with stories about people instead. Who are the scientists? Where are they from? What sorts of people are they? What’s it like to be a research subject? Does the research support the left or the right, or people who want to subsidize sheep or not immunize their children? If there’s an unsettled issue, present it as a dispute between scientists, not as conflicting abstract evidence.

It’s hard to find popular science books that aren’t at least half full of anecdotes or biographies of scientists. Everybody knows that Einstein invented the theory of relativity, but hardly anyone knows what that’s about exactly, or tries to.

Looking through a newspaper, most of the stories are about people. Policy isn’t discussed so much as politics. Recessions are reported with tales of particular people who can’t pay their employees this year.

Philosophy is largely about philosophers from what I can gather.

One might conclude that most people are more interested in people than in whatever it is the people are doing. What people do is mainly interesting for what it says about those doing it.

But this isn’t true: there are some topics where people are happy to read about the topic more than the people. The weather and technology, for instance. Nobody knows who invented most of the things they know intimately. It looks from this small list like people are more interested in doings which immediately affect them, and in doers the rest of the time. I don’t read about most topics though, and it’s a small sample. What other topics are people more interested in than they are in those who do them?

Going on that tentative theory, this blog is probably way too related to its subject matter for its subject matter. Would you all like some more anecdotes and personal information? I included some above just in case, as I sat in my friend Robert Wiblin’s dining room and drank coffee, which I like, from the new orange plunger I excitedly bought yesterday on my way to the highly ranked Australian National University, where I share an office with a host of stylishly dressed and interesting students tirelessly working away on something or another really important.

Limited kindness is unappreciated

If you have not yet interacted with a person, you are judged neutrally by them. If you do something for them once, then you move up in their eyes. If you continue to benefit them you can move further up. If you stop you move to well below zero; you have actually slighted them. Even if you slow down a bit you can go into negative territory. This goes for many things humans offer each other, from tea to sex. Why is limited attention worse than none?

One guess is that it’s an upshot of tit-for-tat. If I am nice to someone, they are nice to me in return, as obliged. Then I am obliged. Mentioning that the interaction has occurred an even number of times doesn’t get you off the hook; you always owe more friendly deeds.
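
To make that intuition concrete, here is a toy sketch (my own illustration, with made-up function names, not anything from game theory beyond the standard strategy): two players both running tit-for-tat just copy each other’s last move, so once cooperation starts there is never a round at which the ledger reads ‘even’ and either side can stop without it registering as a defection.

```python
# Toy illustration: two tit-for-tat players. Each simply repeats the
# opponent's previous move, so once cooperation starts, every friendly
# act obliges another one -- there is no round where the pair is 'even'.

def tit_for_tat(opponent_history):
    """Cooperate on the first move; afterwards copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds=6):
    a_history, b_history = [], []
    for _ in range(rounds):
        a_move = tit_for_tat(b_history)
        b_move = tit_for_tat(a_history)
        a_history.append(a_move)
        b_history.append(b_move)
    return list(zip(a_history, b_history))

print(play())  # [('C', 'C'), ('C', 'C'), ...]: cooperation never settles up
```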

Another potential reason is that when you haven’t interacted with someone they still have high hopes you will be a good person to know, whereas when you know them and cease to give them attention, you are demonstrably not. This doesn’t seem right, as strangers usually remain strangers, and people who have had an interest often return to it.

Perhaps unfriendliness is a punishment to encourage your future cooperation? People who have been useful in the past are a better target than others because they are presumably already close to being friendly again. If I’m wondering whether to phone you or not, and I think you will be miffed if I haven’t, it may push me over the line; whereas if we haven’t met and I think you might be miffed when we eventually do, I probably won’t bother, because I will probably never meet you or want to anyway.

For whatever reason, this must reduce the occurrence of friendly behavior enormously. Before you interact with someone you must ascertain that they are likely enough to be good enough for long enough that it’s worth the cost of their badmouthing or teary appeals to stay if you ever decide they’re not. This certainly limits my own friendliness – often I wouldn’t mind being helpful to strangers, but I’ve learned the annoying way how easy it is to become an obligated ‘friend’ just because you can’t bear to watch someone suffer on a single occasion. So other people prevent me from benefiting them with their implicit threat of obligation.

Interestingly, one situation where humans are nice to one another and not further obliged is when they trade fairly at the outset, such as in shops. This supports the tit-for-tat theory.

‘Cheap’ goals won’t explode intelligence

An intelligence explosion is what hypothetically happens when a clever creature finds that the best way to achieve its goals is to make itself even cleverer first, and then to do so again and again as its heightened intelligence makes the further investment cheaper and cheaper. Eventually the creature becomes uberclever and can magically (from humans’ perspective) do most things, such as end humanity in pursuit of stuff it likes more. This is predicted by some to be the likely outcome for artificial intelligence, probably as an accidental result of a smart enough AI going too far with any goal other than forwarding everything that humans care about.

In trying to get to most goals, people don’t invest and invest until they explode with investment. Why is this? Because it quickly becomes cheaper to actually fulfil a goal than it is to invest more and then fulfil it. The cheaper the initial goal, the earlier this happens. Years of engineering education prior to building a rocket will speed up the project, but it would slow down the building of a sandwich.

A creature should only invest in many levels of intelligence improvement when it is pursuing goals significantly more resource-intensive than creating many levels of intelligence improvement. It doesn’t matter that inventing new improvements to artificial intelligence gets easier as you are smarter, because everything else does too. If intelligence makes other goals easier at the same rate as it makes building more intelligence easier, no goal which is cheaper than building a given amount of intelligence improvement with your current intelligence could cause an intelligence explosion of that size.
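
A rough way to see this is with a toy cost model (my own illustrative numbers, not anything from the literature): suppose each intelligence upgrade costs 100 units at the agent’s current ability and halves the cost of everything afterwards, the goal and later upgrades alike. Then a goal either never justifies an upgrade at all, or, if it is expensive enough relative to the first upgrade, keeps justifying them indefinitely.

```python
# Toy cost model (illustrative assumptions only): an upgrade costs 100 units
# at the agent's current intelligence and halves the cost of everything done
# afterwards -- both the goal and any later upgrades.

def total_cost(goal_cost, upgrades, upgrade_cost=100.0, discount=0.5):
    """Cost of buying `upgrades` improvements in sequence, then doing the goal."""
    spent_on_upgrades = sum(upgrade_cost * discount**i for i in range(upgrades))
    return spent_on_upgrades + goal_cost * discount**upgrades

for goal_cost in (10, 150, 1_000_000):  # a sandwich, a small project, a huge one
    best = min(range(20), key=lambda k: total_cost(goal_cost, k))
    print(goal_cost, best)

# Prints 0 upgrades for the 10- and 150-unit goals, but 19 (the cap here) for
# the 1,000,000-unit goal: because upgrades make later upgrades cheaper at the
# same rate as they make the goal cheaper, investment either never starts or
# never stops -- only a goal much dearer than an upgrade sets off the runaway.
```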

Plenty of questions anyone is currently looking for answers to, such as ‘how do we make super duper nanotechnology?’, ‘how do we cure AIDS?’, ‘how do I get really really rich?’ and even a whole bunch of math questions are likely easier than inventing multiple big advances in AI. The main dangerous goals are the effectively infinitely expensive ones, such as ‘how many digits of pi can we work out?’ and ‘please manifest our values maximally throughout as much of the universe as possible’. If someone were to build a smart AI and set it to solve any of those relatively cheap goals, it would not accidentally lead to an intelligence explosion. The risk is only with the very expensive goals.

The relative safety of smaller goals here could be confused with the relative safety of goals that comprise a small part of human values. A big fear with an intelligence explosion is that the AI will only know about a few human goals, and so will destroy everything else humans care about in pursuit of them. Notice that these are two different parameters: the proportion of the set of important goals the intelligence knows about, and the expense of carrying out the task. Safest are cheap tasks where the AI knows about many of the values it may influence. Worst are potentially infinitely expensive goals with a tiny set of relevant values, such as any variation on ‘do as much of x as you can’.

How does Facebook make overt self obsession ok?

People who talk about themselves a lot are generally disliked. A likable person will instead subtly direct conversation to where others request the information they want to reveal. Revealing good news about yourself is a good sign, but wanting to reveal good news about yourself is a bad sign. Best to do it without wanting to.

This appears true of most human interaction, but apparently not of that on Facebook. On Facebook, when you are not posting photographs of yourself and updating people on your activities, you are writing notes listing twenty things nobody knows about you, linking people to analyses of your personality, or alerting them to your recent personal and group affiliations. Most of this is unasked for by others. I assume it is similar for other social networking sites.

If over lunch I decided, without your suggestion, to list to you twenty random facts about me, tell you the names of all my new acquaintances, and show you my collection of photos of myself, our friendship would soon wane. Why is Facebook different? Here are some reasons I can think of:

  1. It is ok to talk about yourself when asked, and in a space where communication is very public to a group, nobody knows if you were asked by someone else. This seems the case for the self-obsessed notes prefaced with ‘seeing as so many of you have nagged me to do this I guess I will reluctantly write a short essay on myself’ and such things, but I doubt it applies the rest of the time.
  2. Most writing on Facebook isn’t directed at anyone, and people are not forced to read it. It is the boredom and annoyance of being forced to hear about other people’s lives that puts people off those who discuss themselves too much, not signaling. This doesn’t explain why people spend so much time reading about one another on Facebook.
  3. Forcing a specific other person to listen to you go on about yourself is a dominance move. Describing yourself endlessly into cyberspace isn’t, as it’s not directed at anyone. This doesn’t explain why it would also look bad to decorate your house with posters of yourself or offer free newsletters about your exploits.
  4. The implicit rules on Facebook say that you must talk about yourself. Everyone is happy with this, as it lets them talk about themselves. So they don’t punish people who talk about themselves a lot there. And thus a new equilibrium was formed. But shouldn’t talking about yourself more still send the same signals? And why wouldn’t this have happened elsewhere?

Philosophy of mind review

I recently read A Brief Introduction to the Philosophy of Mind, a short undergraduate text. I didn’t understand some bits, but I’m not sure if that’s because the book wasn’t that good or philosophy isn’t or I’m not. Here I list them, for you to enlighten me on:

1. It’s apparently standard to use what you do or don’t want to believe as evidence for what is true. E.g. A legitimate criticism of parallelism and epiphenomenalism is that they are ‘fatalistic’. If a theory means that aliens wouldn’t feel the same as us, then it is too anthropomorphic. The problem of other minds implies that we don’t know how others feel, but we tend to assume we do, therefore we do and anything that implies otherwise is wrong. “Externalism, then, opens the door to an unpalatable form of skepticism, and this is reason enough to adopt internalism instead.” Is there some legit reason for this?

2. It’s apparently standard to use the fact that you can imagine a situation where the theory wouldn’t hold as evidence that it isn’t true. E.g. That you can imagine someone with a different brain state and the same mind state is evidence against their coincidence. You can imagine zombies, so functions or brain states can’t determine mental states. It would be correct to say that your previous concept of x can’t determine y if you can imagine it varying with the same y, but it’s not evidence that the concept can’t be extended to coincide.

3. An argument against the interaction between mind and brain necessary for dualism: “…The mind is non-physical and so does not occupy space. If the mind cannot occupy space, there can be no place in the brain or space where interaction happens”. Why does causality have to take up space?

4. Parallelism (the version of dualism where there is no interaction between mind and body, but it so happens that they coincide, thanks to God or something else conveniently external) is not criticized for the parallel existence of a physical world being completely unnecessary to explain what we see if it doesn’t interact with our minds.

5. An argument given against brain states coinciding with mental states is that a variety of brain states produce roughly the same mental states – for instance hearing the sound of bells ringing coincides with quite different brain states in someone whose brain has been partly damaged and the relevant parts replaced by other neuroplastic brain regions, but we assume the experience is basically the same. Similarly, for reasons mentioned in 1 we would like to think aliens with different brains have the same feelings. Apparently, ‘these kinds of considerations have motivated philosophers (e.g., Jerry Fodor) to adopt an idea called the principle of multiple realization. According to this principle…the same type…of mental state, such as the sensation of pain, can exist in a variety of different complex physical systems. Thus it is possible for…forms of life to share the same kinds of mental states though they might have nothing in common at the physical level. This principle…has led many philosophers to abandon the identity theory as a viable theory of mind.’ But the evidence that other people or creatures have similar mental states to you is by analogy to you, and analogy becomes weaker as you know their brains are significantly different – there is no reason to suppose that a different creature feels exactly the same as you. Also you can say brain states coincide with mental states while maintaining that a broad class of brain states correspond to similar mental states. Obviously a variety of brain states coincide with variations on ‘hearing bells ring’ if you can hear bells ring while hearing other things, or after you have learned something, or when you are sleepy. You can say the brain states have something in common without requiring they be identical. There is no evidence that they have ‘nothing in common physically’. I don’t see why there being more than one exact brain state that coincides with apparent pain refutes an identity between brains and minds.

6. Functionalism is put forward as an explanation of consciousness. It doesn’t seem to explain qualia, because someone with an inverted colour spectrum of qualia would presumably behave the same. To which functionalists apparently argue that this doesn’t matter that much and such differences between experiences are probably common by virtue of functions being implemented differently in different brains. But if brain states other than functions characterize conscious experience, it seems you have gone back to some theory where any old non-functional brain states determine mental states anyway. Or does the presence of just any ‘function’ cause awareness, then other things determine what the awareness is of? What classes as a ‘function’ anyway? Something that evolution was actually trying to achieve?

7. To decide whether folk psychology can be eliminated by eliminative materialism, one question given is whether it is a theory (because there is a precedent of other theories being eliminated). The fact that it gives false predictions sometimes and we don’t discard it is said to show it isn’t a theory. “If a scientific theory yields even one false prediction, this is usually reason enough to think it is a bad theory and ought to be abandoned or amended”. True for some theories maybe, but not for theories about likely behavior of messy systems, such as those in social science and psychology. And why can’t it be eliminated if it’s not a theory? If it’s something like a theory except wrong more often, does that protect it somehow?

8. Supervenience is the idea that mental properties depend on physical ones, but can’t be reduced to them entirely. Arguments given against this: a) Supervenience wouldn’t imply that physical properties cause mental ones – it could still be vice versa. We want to think physical properties are primary for some unexplained reason. Therefore supervenience is unsatisfactory. But if physical properties causing mental is necessary in a theory for some reason, doesn’t that just narrow it down to ‘supervenience + physical causes mental’ theory being true? b) Supervenience doesn’t actually explain anything – it just describes the relationship. But what is an explanation other than a simpler description which includes the phenomena you wanted explained? What would an explanation look like?

9. What determines the content of a mental state? Internalism says the contents of your mind, externalism says your relationships to external things. Seems like a pointless definition question – supplying a label and asking what it defines. You can categorize thoughts according to either. I must be missing something here.