
Dignity

Dignity is apparently big in parts of ethics, particularly as a reason to stop others doing anything ‘unnatural’ regarding their bodies, such as selling their organs, modifying themselves or reproducing in unusual ways. Dignity apparently belongs to you except that you aren’t allowed to sell it or renounce it. Nobody who finds it important seems keen to give it a precise meaning. So I wondered if there was some definition floating around that would sensibly warrant the claims that dignity is important and is imperiled by futuristic behaviours.

These are the ones I came across variations on often:

The state or quality of being worthy of respect

An innate moral worthiness, often considered specific to homo sapiens.

Being respected by other people is sure handy, but so are all the other things we trade off against one another at our own whims. Money is great too for instance, but it’s no sin to diminish your wealth. Plus plenty of things people already do make other people respect them less, without anyone thinking there’s some ethical case for banning them. Where are the papers condemning being employed as a cleaner, making jokes that aren’t very funny, or drunkenly revealing your embarrassing desires? The mere act of failing to become well read and stylishly dressed is an affront to your personal dignity.

This may seem silly; surely when people argue about dignity in ethics they are talking about the other, higher definition – the innate worthiness that humans have, not some concrete fact about how others treat you. Apparently not though. When people discuss paid organ donation for instance, there is no increased likelihood of ceasing to be human, and losing whatever dollop of inherent worth comes with that, during the operation just because cash was exchanged. Just plain old risk that people will think ill of you if you sell yourself.

The second definition, if it innately applies to humans without consideration for their characteristics, is presumably harder to lose. It’s also impossible to use. How you are treated by people is determined by what those people think of you. You can have as much immeasurable innate worthiness as you like; you will still be spat on if people disagree with reality, which they probably will, having no faculties for perceiving innate moral values. Reality doesn’t offer any perks for being inherently worthy either. So why care whether you have this kind of dignity, even if you think such a thing exists?

Doers or doings?

A girl recently invited me to a public lecture she was running with Helen Caldicott, the famous anti-nuclear advocate. Except the girl couldn’t remember the bit after ‘famous’. When I asked her, she narrowed it down to something big picture related to the environment. Helen’s achievements were obviously secondary, if not twenty-secondary, in motivating her to organize the event. Though the fact she was famous for whatever those things were was important.

I’ve done a few courses in science journalism. The main task there is to make science interesting and intelligible for people. The easiest way to do this is to cut down on the dry bit about how reality works, and fill it up with stories about people instead. Who are the scientists? Where are they from? What sorts of people are they? What’s it like to be a research subject? Does the research support the left or the right, or people who want to subsidize sheep or not immunize their children? If there’s an unsettled issue, present it as a dispute between scientists, not as abstract disagreeing evidence.

It’s hard to find popular science books that aren’t at least half full of anecdotes or biographies of scientists. Everybody knows that Einstein invented the theory of relativity, but hardly anyone knows what that’s about exactly, or tries to.

Looking through a newspaper, most of the stories are about people. Policy isn’t discussed so much as politics. Recessions are reported with tales of particular people who can’t pay their employees this year.

Philosophy is largely about philosophers from what I can gather.

One might conclude that most people are more interested in people than in whatever it is the people are doing. What people do is mainly interesting for what it says about those doing it.

But this isn’t true; there are some topics where people are happy to read about the topic more than the people. The weather and technology, for instance. Nobody knows who invented most things they know intimately. It looks from this small list like people are more interested in doings which immediately affect them, and doers the rest of the time. I don’t read most topics though, and it’s a small sample. What other topics are people more interested in than they are in those who do them?

Going on that tentative theory, this blog is probably way too related to its subject matter for its subject matter. Would you all like some more anecdotes and personal information? I included some above just in case, as I sat in my friend Robert Wiblin‘s dining room and drank coffee, which I like, from the new orange plunger I excitedly bought yesterday on my way to the highly ranked Australian National University, where I share an office with a host of stylishly dressed and interesting students tirelessly working away on something or another really important.

Limited kindness is unappreciated

If you have not yet interacted with a person, you are judged neutrally by them. If you do something for them once, then you move up in their eyes. If you continue to benefit them you can move further up. If you stop you move to well below zero; you have actually slighted them. Even if you slow down a bit you can go into negative territory. This goes for many things humans offer each other, from tea to sex. Why is limited attention worse than none?

One guess is that it’s an upshot of tit-for-tat. If I am nice to someone, they are nice to me in return, as obliged. Then I am obliged. Mentioning that the interaction has occurred an even number of times doesn’t get you off the hook; you always owe more friendly deeds.
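The tit-for-tat guess can be put as a toy sketch (the payoff labels and the three-state response are my own illustrative assumptions, not a claim about real psychology). A tit-for-tat partner mirrors your last move, so stopping after a run of friendliness reads as defection, while never starting reads as nothing at all – exactly the asymmetry above:

```python
# Toy tit-for-tat sketch (illustrative assumptions only).
# A tit-for-tat partner mirrors your previous move: cooperation begets
# cooperation, and stopping after you've started reads as a defection.

def partner_response(my_history):
    """Tit-for-tat: no opinion before any interaction, then copy my last move."""
    if not my_history:
        return "neutral"   # no interaction yet: judged neutrally
    return "friendly" if my_history[-1] == "friendly" else "hostile"

# Never interacting: judged neutrally.
print(partner_response([]))                                # neutral

# Steady friendliness: reciprocated.
print(partner_response(["friendly", "friendly"]))          # friendly

# Friendliness that stops: now read as a slight, and punished.
print(partner_response(["friendly", "friendly", "stop"]))  # hostile
```

So under this toy rule, limited kindness really does land you below where you started, while no kindness leaves you at zero.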

Another potential reason is that when you haven’t interacted with someone they still have high hopes you will be a good person to know, whereas when you know them and cease to give them attention, you are demonstrably not. This doesn’t seem right, as strangers usually remain strangers, and people who have had an interest often return to it.

Perhaps un-friendliness is a punishment to encourage your future cooperation? People who have been useful in the past are a better target than others because they are presumably already close to being friendly again. If I’m wondering whether to phone you or not, and I think you will be miffed if I haven’t, it may push me over the line; whereas if we haven’t met and I think you might be miffed when we eventually do, I probably won’t bother, because I will probably never meet you or want to anyway.

For whatever reason, this must reduce the occurrence of friendly behavior enormously. Before you interact with someone you must ascertain that they are likely enough to be good enough for long enough that it’s worth the cost of their badmouthing or teary appeals to stay if you ever decide they’re not. This certainly limits my own friendliness – often I wouldn’t mind being helpful to strangers, but I’ve learned the annoying way how easy it is to become an obligated ‘friend’ just because you can’t bear to watch someone suffer on a single occasion. So other people prevent me from benefiting them with their implicit threat of obligation.

Interestingly, one situation where humans are nice to one another and not further obliged is when they trade fairly at the outset, such as in shops. This supports the tit-for-tat theory.

‘Cheap’ goals won’t explode intelligence

An intelligence explosion is what hypothetically happens when a clever creature finds that the best way to achieve its goals is to make itself even cleverer first, and then to do so again and again as its heightened intelligence makes the further investment cheaper and cheaper. Eventually the creature becomes uberclever and can magically (from humans’ perspective) do most things, such as end humanity in pursuit of stuff it likes more. This is predicted by some to be the likely outcome for artificial intelligence, probably as an accidental result of a smart enough AI going too far with any goal other than forwarding everything that humans care about.

In trying to get to most goals, people don’t invest and invest until they explode with investment. Why is this? Because it quickly becomes cheaper to actually fulfil a goal than it is to invest more and then fulfil it. This happens earlier the cheaper the initial goal. Years of engineering education prior to building a rocket will speed up the project, but it would slow down the building of a sandwich.

A creature should only invest in many levels of intelligence improvement when it is pursuing goals significantly more resource intensive than creating many levels of intelligence improvement. It doesn’t matter that inventing new improvements to artificial intelligence gets easier as you get smarter, because everything else does too. If intelligence makes other goals easier at the same rate as it makes building more intelligence easier, no goal which is cheaper than building a given amount of intelligence improvement with your current intelligence could cause an intelligence explosion of that size.
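The rocket-versus-sandwich argument can be made concrete with a toy model (the cost function and all the numbers are made up for illustration; the one assumption taken from the text is that greater intelligence speeds up every task, including further self-improvement, by the same factor):

```python
# Toy model of the invest-vs-act decision (illustrative numbers only).
# Assumption from the text: each intelligence upgrade multiplies the speed
# of ALL later work, including further upgrades, by the same factor.

def total_cost(goal_cost, upgrade_cost, speedup, levels):
    """Cost of taking `levels` intelligence upgrades, then fulfilling the goal.

    Each upgrade costs `upgrade_cost` (divided by current speed) and
    multiplies the speed of all subsequent work by `speedup`.
    """
    speed = 1.0
    spent = 0.0
    for _ in range(levels):
        spent += upgrade_cost / speed
        speed *= speedup
    return spent + goal_cost / speed

# Cheap goal (a 'sandwich'): investing first only adds cost.
print(total_cost(goal_cost=1, upgrade_cost=10, speedup=2, levels=0))  # 1.0
print(total_cost(goal_cost=1, upgrade_cost=10, speedup=2, levels=1))  # 10.5

# Very expensive goal (a 'rocket'): several rounds of investment pay off.
print(total_cost(goal_cost=1000, upgrade_cost=10, speedup=2, levels=0))  # 1000.0
print(total_cost(goal_cost=1000, upgrade_cost=10, speedup=2, levels=3))  # 142.5
```

In this sketch another upgrade only pays while the remaining goal cost dwarfs the upgrade cost at the current speed, so a goal cheaper than a given stack of upgrades never motivates building that stack – the point of the paragraph above.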

Plenty of the questions people are currently looking for answers to, such as ‘how do we make super duper nanotechnology?’, ‘how do we cure AIDS?’, ‘how do I get really really rich?’ and even a whole bunch of math questions, are likely easier than inventing multiple big advances in AI. The main dangerous goals are effectively infinitely expensive ones such as ‘how many digits of pi can we work out?’ and ‘please manifest our values maximally throughout as much of the universe as possible’. If someone were to build a smart AI and set it to solve any of those relatively cheap goals, it would not accidentally lead to an intelligence explosion. The risk is only with the very expensive goals.

The relative safety of smaller goals here could be confused with the relative safety of goals that comprise a small part of human values. A big fear with an intelligence explosion is that the AI will only know about a few of human goals, so will destroy everything else humans care about in pursuit of them. Notice that these are two different parameters: the proportion of the set of important goals the intelligence knows about and the expense of carrying out the task. Safest are cheap tasks where the AI knows about many of our values it may influence. Worst are potentially infinitely expensive goals with a tiny set of relevant values, such as any variation on ‘do as much of x as you can’.

How does Facebook make overt self obsession ok?

People who talk about themselves a lot are generally disliked. A likable person will instead subtly direct conversation to where others request the information they want to reveal. Revealing good news about yourself is a good sign, but wanting to reveal good news about yourself is a bad sign. Best to do it without wanting to.

This appears true of most human interaction, but apparently not of that on Facebook. On Facebook, when you are not posting photographs of yourself and updating people on your activities, you are writing notes listing twenty things nobody knows about you, linking people to analyses of your personality, or alerting them to your recent personal and group affiliations. Most of this is unasked for by others. I assume it is similar for other social networking sites.

If over lunch I decided, without your suggestion, to list to you twenty random facts about me, tell you the names of all my new acquaintances, and show you my collection of photos of myself, our friendship would soon wane. Why is Facebook different? Here are some reasons I can think of:

  1. It is ok to talk about yourself when asked, and in a space where communication is very public to a group, nobody knows if you were asked by someone else. This seems the case for the self obsessed notes prefaced with ‘seeing as so many of you have nagged me to do this I guess I will reluctantly write a short essay on myself’ and such things, but I doubt it applies the rest of the time.
  2. Most writing on Facebook isn’t directed at anyone, and people are not forced to read it. It is the boredom and annoyance of being forced to hear about other people’s lives that puts people off those who discuss themselves too much, not signaling. This doesn’t explain why people spend so much time reading about one another on Facebook.
  3. Forcing a specific other person to listen to you go on about yourself is a dominance move. Describing yourself endlessly into cyberspace isn’t, as it’s not directed at anyone. This doesn’t explain why it would also look bad to decorate your house with posters of yourself or offer free newsletters about your exploits.
  4. The implicit rules on Facebook say that you must talk about yourself. Everyone is happy with this, as it lets them talk about themselves. So they don’t punish people who talk about themselves a lot there. And thus a new equilibrium was formed. But shouldn’t talking about yourself more still send the same signals? And why wouldn’t this have happened elsewhere?