
Systems and stories

Tyler claims we think of most things in terms of stories, which he says is both largely inevitable and one of our biggest biases.

He includes the abstractions of non-fiction as ‘stories’, and recommends ‘messiness’ as a better organising principle for understanding our lives and other things. But the problems with stories that Tyler mentions apply mostly to narrative stories, not to other abstractions such as scientific ‘stories’. It looks to me like we think about narrative stories and other abstractions quite differently, so we should not lump them together. I suspect we would do better to shift more toward thinking in terms of other abstractions than to focus on messiness, but I’ll get to that later. First, let me describe the differences between these styles of thought.

I will call the type of thought we use for narrative stories, such as fiction and most social interactions, ‘story thought’. I will call the style of thought we use for other abstractions ‘system thought’; this is what we use to think about maths, for instance. Both are used by all people, but to different degrees on different topics.

Here are the differences between story thought and system thought I think I see, plus a few from Tyler. It’s a tentative list, so please criticize generously and give me more to add.

Agents

Role of agents
Stories are made out of agents, whereas systems are made out of whatever math and physics is intuitive to us. Systems occasionally model agents, but in system thought agents are a pretty complex, obscure thing for a system to contain. In story thought we expect everything to be intentional.

Perspective
Stories are usually told from an agent’s perspective; systems are understood from an objective outside viewpoint. Even if a story doesn’t have a narrator, there is usually a protagonist or several, plus less detailed characters stretching off into the distance.

Unique identity
The agents that stories are made of always have unique identities, even if more than one has basically the same characteristics. In system thought, units are interchangeable, except that they may have varying quantities of generic parameters. ‘You’ are a set of preferences, a gender, an income level, a location, and some other things. In story thought, any ambiguity about whether someone is the same person as they used to be is a big issue, and a whole story can be about working out a definitive answer. In system thought it’s a meaningless question.

Good, evil and indifference

Ought and is

Story thought is concerned largely with judging the virtue of things, whereas system thought is mostly concerned with what happens. Stories are full of good and evil characters and actions, duties, desires, and normative messages. If system thought is used for thinking about ‘ought’ questions, it is done by choosing a parameter to care about and simply maximizing it, or by choosing a particular goal, such as for a car to work. In story thought, goodness doesn’t relate to quantities of anything in particular, and you don’t ponder it by adding anything up. People who want to think about human interactions in terms of systems sometimes get around this by calling anything humans like ‘utility’, then adding that up. This irritates people who don’t want to think of stories in system terms.

Motives

In stories, intentions tend to be strongly related to inherent goodness or evilness. If they are not intentionally directed at big good or evil goals, they are meant to be understood as strong signals about the person’s character. Systems don’t have an analog.

Meaning

Overarching meaning
Stories often have an overall moral or a point. That is, a story as a whole tends to contain a normative message for the observer. Systems don’t.

Other meanings and symbolism
Further meaning can be read into both stories and systems. However in stories this is based on superficial similarity and is intended to say something important, whereas in systems it’s based on structural similarity, is not intended, and may not be important. If you see a black cat cross your path, story thought says further dark things may cross your metaphoric path, while system thought might say animals in general can probably cross many approximately flat surfaces.

Mechanics

No levels below social
In stories everything occurs because of social-level dynamics. Lower levels of abstraction such as physics and chemistry can’t instigate events. In reality it would be absurd to think a coffee fell on your lap so that you would have an awkward encounter with your future lover ten minutes later, but in story thought it would be absurd for a coffee to fall on your lap merely because it caught your sleeve. Even events that weren’t supposedly intended by any character serve a social-level purpose. Curiously, the phrase ‘everything happens for a reason’ is used in talking about both systems and stories, but the ‘reasons’ point in opposite temporal directions. In system thought it means everything is necessitated somehow by the previous state of the system; in story thought it means every occurrence will have future social significance if it does not already.

Is and ought interaction
If a system contains a parameter you care about, the fact you care about it doesn’t affect how the system works. In story thought you can expect how you treat your servant on a single occasion to influence whether you happen to run into the heroine half naked in several months.

Free will
Stories are full of people making ‘free’ choices, not determined by their characteristics yet somehow still attributable to them. System thought doesn’t know how to capture this incoherence to the satisfaction of story thought.

Opportunity costs and other indirect causation
In story thought, the causation we notice runs in the idiosyncratic way we understand blame to. If I cause you to do something by allowing you to, and you do it badly, I did not cause it to happen badly. In an analogous system, by contrast, we do say that if a rock lands on a roof, and the roof doesn’t hold the rock well, the collapse was partly caused by the rock’s landing place.

Story causation also doesn’t include opportunity costs much, unless they are intentional: I didn’t cause Africans to suffer horribly this year by buying movie tickets instead of paying to deworm them, nor did any of the similarly neglectful story heroes ever. In an analogous system, oxygen reacting with hydrogen quite obviously causes less oxygen to remain to react with anything else later.

Probability
The main components of a story need only be plausible; they need not be likely. Story thought notices if the hero is happy when his girlfriend dies, but doesn’t mind much if he happens to find himself in a situation central to the future of his planet. System thought, on the other hand, is mostly uninterested in the extremes of possibility, and more concerned with normal behavior. Nobody cares much whether it’s possible that your spending a dollar will somehow lead to the economy crashing.

This probably has to do with free will being a big part of stories. Things only need to be possible for someone with free will to do them. To story thought, asking why a character happens to be right and good when everyone else isn’t is a strange question. He’s good and right because he wants to be, and they all don’t want to be. Specific characters are to blame.

Time
In stories events tend to unfold in sequence, whereas in systems they can occur in parallel, or there might be no time at all.

Adeptness of our minds

Story thought is automatic, easy, compelling, and fun. System thought is harder and less compelling if it contradicts story thought. It can be fun, but often isn’t.

Might law save us from uncaring AI?

Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.

When is it efficient to kill humans?

At first glance, it looks like creatures with the power to take humans’ property would do so whenever the value of the property, minus the cost of stealing it, was greater than the value of anything the human might produce with it. When AI is so cheap and efficient that the human would be replaced immediately, and the replacement would use the resources so much better as to make up for the costs of stealing and replacement, the human is better dead. This might be soon after humans are overtaken. However, such reasoning really imagines one powerful AI’s dealings with one person, then assumes that generalizes to many of each. Does it?
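As a rough sketch of that condition (the function, variable names, and numbers are mine, purely for illustration, not anything Robin proposed):

```python
def stealing_pays(property_value_to_ai, theft_cost, replacement_cost, human_output):
    """Toy version of the condition above: taking the property pays off when
    what the AI gains from it, net of the costs of theft and of replacing the
    human's role, exceeds what the human would have produced with it.
    All quantities are in the same (made-up) units."""
    return property_value_to_ai - theft_cost - replacement_cost > human_output

# With made-up numbers: the AI values the resources at 100, theft costs 5,
# replacement costs 10, and the human would have produced 20 with them.
print(stealing_pays(100, 5, 10, 20))  # True: 85 > 20
```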

What does law do?

In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what it wants, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if it could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape this dynamic of groups dominating one another (or some of it): everyone in a very large group pre-commits to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong stand ready to crush the strong.
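A minimal sketch of that pre-commitment mechanism, with made-up strengths (my own construction, not Robin’s):

```python
def theft_succeeds(coalition, victim, bystanders, law=False):
    """Toy model: a theft succeeds if the attacking coalition is stronger
    than whoever defends. Without law, the victim stands alone; with law,
    all bystanders have pre-committed to join the victim's side."""
    defenders = victim + (bystanders if law else 0)
    return coalition > defenders

print(theft_succeeds(coalition=60, victim=10, bystanders=40))            # True: 60 > 10
print(theft_succeeds(coalition=60, victim=10, bystanders=40, law=True))  # False: 60 < 50... no, 60 > 50 is checked: False only if defenders >= 60
```

With these numbers the lawless theft succeeds, while under law the coalition of 60 faces 50 defenders and still wins; a larger pool of pre-committed bystanders (say 60 or more) is what makes domination unprofitable, which is the point about needing the super-strong.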

When is forced organ selfishness good for you?

Simon Rippon claims having a market for organs might harm society:

It might first be thought that it can never be a good thing for you to have fewer rather than more options. But I believe that this attitude is mistaken on a number of grounds. For one, consider that others hold you accountable for not making the choices that are necessary in order to fulfil your obligations. As things stand, even if you had no possessions to sell and could not find a job, nobody could criticize you for failing to sell an organ to meet your rent. If a free market in body parts were permitted and became widespread, they would become economic resources like any other, in the context of the market. Selling your organs would become something that is simply expected of you when the financial need arises. A new “option” can thus easily be transformed into an obligation, and it can drastically change the attitudes that it is appropriate for others to adopt towards you in a particular context.

He’s right that at the moment when you would normally throw your hands in the air and move on, you are worse off if an organ market gives you the option of paying more debts before declaring bankruptcy. But this is true of anything you can sell. Do we happen to have just the right number of salable possessions? By Simon’s argument, people should benefit from bans on selling all sorts of things. For instance, labor. People (the poor especially) are constantly forced to sell their time – such an integral part of their selves – to pay rent and other debts they had no choice but to incur. If only they were protected from this huge obligation that we laughably call an ‘option’. Such a ban might be costly for the landlord, but it would be good for the poor people, right? No! The landlords would respond by not renting to them.

So why shouldn’t we expect the opposite effect if people are allowed to sell more of their possessions? People who currently don’t have the assets or secure income to be trusted with loans or ongoing rental payments might be legitimately offered such things if they had another asset to sell. Think of all the people who would benefit from being able to mortgage their kidney to buy a car instead of riding to some closer job while they gradually save up.

In general, when negotiating it’s best not to have options that are worse for you. It’s true that when the time comes to carry out your side of a deal, this means being forced to renege. But when making the deal beforehand, you do better to have the option of carrying out your part later, so that the other person does their part. And in a many-shot game, you do best to be able to do your part the whole time, so that the trading (which is better than not trading) continues.

How does information affect hookups?

With social networking sites enabling the romantically inclined to find out more about a potential lover before the first superficial chat than they previously would have in the first month of dating, this is an important question for the future of romance.

Let’s assume that in looking for partners, people care somewhat about rank and somewhat about match. That is, they want someone ‘good enough’ for them who also has interests and a personality that they like.

First, look at the rank component alone. Assume for a moment that people are happy to date anyone they believe is equal to or better than them in desirability. Then if everyone has a unique rank and perfect information, there will never be any dating at all. The less information people have, the more errors they make in comparing, so the more chance that A will think B is above her while B thinks A is above him. Even if people are willing to date people somewhat less desirable than themselves, the same holds – by making more errors you trade wanting more desirable people for wanting less desirable people, who are more likely to want you back, even if they are making their own errors. So to the extent that people care about rank, more information means fewer hookups.
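A toy simulation of the rank-only case (entirely my own construction, with made-up noise levels) shows the mechanism: the noisier each person’s estimate of the other, the more often two people each overestimate the other enough for mutual interest.

```python
import random

def mutual_interest_rate(noise, trials=100_000):
    """Fraction of random pairs where each person believes the other is at
    least as desirable as themselves, when each perceives the other's true
    rank plus uniform noise. Rank-only toy model."""
    hits = 0
    for _ in range(trials):
        a, b = random.random(), random.random()       # true desirability
        a_sees_b = b + random.uniform(-noise, noise)  # A's noisy view of B
        b_sees_a = a + random.uniform(-noise, noise)  # B's noisy view of A
        if a_sees_b >= a and b_sees_a >= b:           # mutual interest
            hits += 1
    return hits / trials

for noise in (0.0, 0.1, 0.3, 0.6):
    print(noise, mutual_interest_rate(noise))  # rate rises with noise
```

Perfect information is the zero-noise case, where mutual interest (and so dating) vanishes, as claimed above.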

How about match, then? Here it matters exactly what people want in a match. If they mostly care about their beloved having certain characteristics, more information will let everyone hear about more people who meet their requirements. On the other hand, if we mainly want to avoid people with certain characteristics, more information will strike more people off the list. We might also care about the overall average desirability of characteristics – then more information is as likely to help as to harm, assuming the average person is averagely desirable. Or perhaps we want some minimal level of commonality, in which case more information is always a good thing – it wouldn’t matter if you find out she is a cannibalistic alcoholic prostitute, as long as eventually you discover those board games you both like. There are more possibilities.

You may argue that you will get all the information you want in the end, and the question is only one of speed – the hookups prevented by everyone knowing more initially are those that would have failed later anyway. However, flaws that would make you refuse to touch a person with a barge pole are often ‘endearing’ when you discover them too late, and once loving delusions are in place they can hide further flaws or draw attention away from them, so the rate of information discovery matters. To the extent we care about rank, then, more information should mean fewer relationships. To the extent we care about match, it’s unclear without knowing more about what we want.

SIA on other minds

Another interesting implication, if the self-indication assumption (SIA) is right, is that solipsism is much less likely to be correct than you previously thought, and relatedly, that the problem of other minds is less problematic.

Solipsists think they are unjustified in believing in a world external to their minds, as one only ever knows one’s own mind and there is no obvious reason the patterns in it should be driven by something else (curiously, holding such a position does not entirely dissuade people from trying to convince others of it). This can then be debated on grounds of whether a single mind imagining the world is more or less complex than a world causing such a mind to imagine a world.

The problem of other minds is that even if you believe in the outside world that you can see, you can’t see other minds. Most of the evidence for them is by analogy to yourself, which is only one ambiguous data point (should I infer that all humans are probably conscious? All things? All girls? All rooms at night time?).

SIA says many minds are more likely than one, given that you exist. Imagine you are wondering whether this is World 1, with a single mind among billions of zombies, or World 2, with billions of conscious minds. If you start off roughly uncertain, updating on your own conscious existence with SIA shifts the probability of world 2 to billions of times the probability of world 1.
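To make the update concrete (the numbers here are mine, purely for illustration), SIA weights each world’s prior by its count of conscious observers:

```python
# Toy SIA update: equal priors over the two worlds, each weighted by
# how many conscious minds it contains. All numbers are made up.
prior = {"world 1": 0.5, "world 2": 0.5}
minds = {"world 1": 1, "world 2": 3_000_000_000}

weights = {w: prior[w] * minds[w] for w in prior}
total = sum(weights.values())
posterior = {w: weights[w] / total for w in weights}
print(posterior)  # world 2 ends up billions of times as likely as world 1
```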

Similarly for solipsism. Other minds probably exist. From this you may conclude the world around them does too, or just that your vat isn’t the only one.