When most things are certified, like coffee or wood or insanity, the stuff is produced by one party, then someone else judges it. University is meant to be a certification of something or other, so a nagging question for all those who can think of a zillion better ways to learn things than by moving their morning sleep to a lecture theater is ‘why can’t university work like those other things?’
If the learning bit were done with a different party from the certification bit, everyone could buy their preferred manner of education, rather than being constrained by the need for it to be attached to the most prestigious certification they could get hold of. This would drastically increase efficiency for those people who learn better by reading, talking, or listening to pausable, speedupable, recordings of good lecturers elsewhere than they do by listening to someone gradually mumble tangents at them for hour-long stints, or listening to the medical autobiographies of their fellow tutorial-goers.
This is an old and seemingly good idea, assuming university is for learning stuff, so probably I should assume something else.
Many other things university could be for face the same argument – if you are meant to learn to be a ‘capable and cultivated human being’ or just show you can put your head down and do work, these could be achieved in various ways and tested later.
One explanation for binding the ‘learning’ to the certification is that the drudgery is part of the test. The point is to demonstrate something like the ability to be bored and pointlessly inconvenienced for years on end, without giving up and doing something interesting instead, purely on the vague understanding that it’s what you’re meant to do. That might be a good employee characteristic.
That good though? Surely there is far more employment related usefulness you could equip a person with in several years than just checking they have basic stamina and normal deference to social norms. Presumably just having them work cheaply for that long would tell you the same and produce more. And aren’t there plenty of jobs where the opposite characteristics, such as initiative and responding fast to suboptimal situations, are useful? Why would everyone want signals of placid obedience?
Bryan Caplan argued that university must be long because it is to show conformity and conscientiousness, and anyone can pretend at that for a short while. But why isn’t university more like the army then? People figure out that they don’t have the conformity and conscientiousness for the army much faster than they do for university, from what I hear. University is often successfully done concurrently with spending a year or five drunk, so it’s a pretty weak test for work-ethic-related behaviours.
Another possible explanation is that the system made more sense at some earlier time, and is slow to change because people want to go to prestigious places and not do unusual things. While there’s no obvious reason the current setup allows more prestige, it’s been around a long time, so its institutions are way ahead prestige-wise.
Warning: this post is somewhat technical – looking at this summary should help.
1,000,000 people are in a giant urn. Each person is labeled with a number, 1 through 1,000,000.
A coin will be flipped. If heads, Large World wins and 999,999 people will be randomly selected from the urn. If tails, Small World wins and 1 person will be drawn from the urn.
After the coin flip, and after the sample is selected, we are told that person #X was selected (where X is an integer between 1 and 1,000,000).
Prior probability of Large World: P(heads)=0.5
Posterior probability of Large World: P(heads|person #X selected)=P(heads)=0.5
Regardless of whether the coin landed heads or tails, we knew we would be told about some person being selected. So, the fact that we were told that someone was selected tells us nothing about which world we are in.
Jason Roy argues that the self indication assumption (SIA) is equivalent to such reasoning, and thus wrong. For the self indication assumption to be legitimate it would have to be analogous to a selection procedure where you can only ever hear about person number 693465 for instance – if they don’t come up you hear nothing.
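The two selection procedures being contrasted can be checked with a quick simulation. This is an illustrative sketch, not from the original argument: the urn is scaled down from 1,000,000 people to 1,000 so it runs quickly, and the pre-chosen person’s number (42) is arbitrary. The first procedure reports whoever happens to be selected, so every trial yields a report and the coin estimate stays near the prior of 0.5. The second only yields a report when the pre-chosen person comes up, and conditioning on that report pushes the estimate toward the big world.

```python
import random

N = 1000        # urn size, scaled down from 1,000,000 so the simulation runs quickly
FIXED = 42      # the pre-chosen person for the second procedure (arbitrary)

def posterior_heads_told_about_anyone(trials=5000, seed=0):
    """We are always told about some selected person, whoever it is."""
    rng = random.Random(seed)
    heads_count = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        k = N - 1 if heads else 1      # big sample on heads, one person on tails
        rng.sample(range(N), k)        # someone is drawn and reported either way
        heads_count += heads
    return heads_count / trials        # every trial produces a report, so no update

def posterior_heads_told_about_fixed(trials=5000, seed=0):
    """We only hear anything if person FIXED is among those selected."""
    rng = random.Random(seed)
    heads_count = total = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        k = N - 1 if heads else 1
        if FIXED in set(rng.sample(range(N), k)):
            total += 1
            heads_count += heads
    return heads_count / total         # conditioning on hearing at all: big update

print(posterior_heads_told_about_anyone())   # close to 0.5
print(posterior_heads_told_about_fixed())    # close to 0.999
```

The second number matches the Bayes calculation: person 42 is selected with probability 999/1000 under heads but 1/1000 under tails, so hearing about them at all gives posterior odds of 999 to 1 for the big world.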
In both cases you can only hear about one person in some sense; the question is whether the person you could hear about was chosen before the experiment, or afterwards from among those who came up. The self indication assumption looks at first like a case of the latter: nothing that could be called you existed before the experiment to have dibs on a particular physical arrangement if it came up, and you certainly didn’t start thinking about the self indication assumption until well after you were chosen. These things are not really important though.
Which selection procedure is analogous to using SIA seems to depend on what real life thing corresponds to ‘you’ in the thought experiment when ‘you’ are told about people being pulled out of the urn. If ‘you’ are a unique entity with exactly your physical characteristics, then if you didn’t exist, you wouldn’t have heard of someone else – someone else would have heard of someone else. Here SIA stands; my number was chosen before the experiment as far as I’m concerned, even if I wasn’t there to choose it.
On the other hand ‘you’ can be thought of as an abstract observer who has the same identity regardless of characteristics. Then if a person with different characteristics existed instead of the person with your current ones, it’s just you observing a different first-person experience. Then it looks like you are taking a sample from those who exist, as in the second case, so it seems SIA fails.
This isn’t a question of which of those things exists. Both are concepts coherent enough to refer to real things. Should each then participate in its own style of selection procedure, and reason accordingly? Your physical self discovering with utmost shock that it exists, while the abstract observer looks on, nonplussed? No – they are the same person with the same knowledge now, so they should really come to the same conclusion.
Look more closely at the lot of the abstract observer. Which abstract observers get to exist if there are different numbers of people? If each can only be one person at a time, then in a smaller world some observers who would have been around in the bigger world must miss out. Which means finding yourself as the person with any number X should still make you update in favor of the big world, exactly as much as the entity defined by those physical characteristics should; abstract observers weren’t guaranteed to exist either.
What if the abstract observer experiencing the selection procedure is defined to encompass all observerhood? There is just one observer, who always exists, and either observes lots of creatures or few, but in a disjointed manner such that it never knows if it observes more than the present one at a given time. If it finds itself observing anyone now it isn’t surprised to exist, nor to see the particular arbitrary collection of characteristics it sees – it was bound to see one or another. Now can we write off SIA?
Here the creature is in a different situation to any of Roy’s original ones. It is going to be told about all the people who come up, not just one. It is also in the strange situation of forgetting all but one of them at a time. How should it reason in this new scenario? In ball urn terms, this is like pulling all of the balls out of whatever urn comes up, one by one, but destroying your memories after each one. Since the particular characteristics don’t tell you anything here, this is basically a version of the sleeping beauty problem. Debate has continued on that for a decade, so I shan’t try to answer Roy by solving it now. SIA gives the popular ‘thirder’ position though, so looking at the selection procedure in this perspective does not undermine SIA further.
Whether you think of the selection procedure experienced by an exact set of physical characteristics, an abstract observer, or all observerhood as one, using SIA does not amount to being surprised after the fact by the unlikelihood of whatever number comes up.
I mean to write about anthropic reasoning more in future, so I offer you a quick introduction to a couple of anthropic reasoning principles. There’s also a link to it in ‘pages’ in the side bar. I’ll update it later – there are arguments I haven’t written up yet, plus I’m in the middle of reading the literature, so I hope to come across more good ones there.
Tyler claims we think of most things in terms of stories, which he says is both largely inevitable and one of our biggest biases.
He includes the abstractions of non fiction as ‘stories’, and recommends ‘messiness’ as a better organising principle for understanding our lives and other things. But the problems with stories that Tyler mentions apply mostly to narrative stories, not other abstractions such as scientific ‘stories’. It looks to me like we think about narrative stories and other abstractions quite differently, so should not lump them together. I suspect we would do better to shift more to thinking in terms of other abstractions than to focus on messiness, but I’ll get to that later. First, let me describe the differences between these styles of thought.
I will call the type of thought we use for narrative stories such as fiction and most social interactions ‘story thought’. I will call the style of thought we use for other abstractions ‘system thought’. This is what we use to think about maths for instance. They are both used by all people, but to different degrees on different topics.
Here are the differences between story thought and system thought I think I see, plus a few from Tyler. It’s a tentative list, so please criticize generously and give me more to add.
Agents
Role of agents
Stories are made out of agents, whereas systems are made out of the math and physics which is intuitive to us. Systems occasionally model agents, but in system thought agents are a pretty complex, obscure thing for a system to have. In story thought we expect everything to be intentional.
Perspective
Stories are usually from an agent’s perspective, systems are understood from an objective outside viewpoint. Even if a story doesn’t have a narrator, there is usually a protagonist or several, plus less detailed characters stretching off into the distance.
Unique identity
The agents that stories are made of always have unique identities, even if there is more than one with basically the same characteristics. In system thought units are interchangeable, except they may have varying quantities of generic parameters. ‘You’ are a set of preferences, a gender, an income level, a location, and some other things. In story thought, any ambiguity about whether someone is the same person as they used to be is a big issue, and the whole story is about working out a definitive answer. In system thought it’s a meaningless question.
Good, evil and indifference
Ought and is
Story thought is concerned largely with judging the virtue of things, whereas system thought is mostly concerned with what happens. Stories are full of good and evil characters and actions, duties, desires, and normative messages. If system thought is used for thinking about ‘ought’ questions, this is done by choosing a parameter to care about and simply maximizing it, or choosing a particular goal, such as for a car to work. In story thought goodness doesn’t relate to quantities of anything in particular and you don’t ponder it by adding up anything. People who want to think about human interactions in terms of systems sometimes get around this by calling anything humans like ‘utility’, then adding that up. This irritates people who don’t want to think of stories in system terms.
Motives
In stories, intentions tend to be strongly related to inherent goodness or evilness. If they are not intentionally directed at big good or evil goals, they are meant to be understood as strong signals about the person’s character. Systems don’t have an analog.
Meaning
Overarching meaning
Stories often have an overall moral or a point. That is, a story as a whole tends to contain a normative message for the observer. Systems don’t.
Other meanings and symbolism
Further meaning can be read into both stories and systems. However in stories this is based on superficial similarity and is intended to say something important, whereas in systems it’s based on structural similarity, is not intended, and may not be important. If you see a black cat cross your path, story thought says further dark things may cross your metaphoric path, while system thought might say animals in general can probably cross many approximately flat surfaces.
Mechanics
No levels below social
In stories everything occurs because of social level dynamics. Lower levels of abstraction such as physics and chemistry can’t instigate events. In reality it would be absurd to think a coffee fell on your lap so that you would have an awkward encounter with your future lover ten minutes later, but in story thought it would be absurd for a coffee to fall on your lap because it caught your sleeve. Even events that weren’t supposedly intended by any characters are for a social level purpose. Curiously the phrase ‘everything happens for a reason’ is used to talk about both systems and stories, but the ‘reasons’ are in opposite temporal directions. In system thought it means everything is necessitated somehow by the previous state of the system; in story thought it means every occurrence will have future social significance if it does not already.
Is and ought interaction
If a system contains a parameter you care about, the fact you care about it doesn’t affect how the system works. In story thought you can expect how you treat your servant on a single occasion to influence whether you happen to run into the heroine half naked in several months.
Free will
Stories are full of people making ‘free’ choices, not determined by their characteristics yet somehow determined by them. System thought doesn’t know how to capture this incoherence to the satisfaction of story thought.
Opportunity costs and other indirect causation
In story thought the causation we notice runs in the idiosyncratic way we understand blame to do. If I cause you to do something by allowing you, and you do it badly, I did not cause it to happen badly. In an analogous system, we do say that if a rock lands on a roof, and the roof doesn’t hold the rock well, the collapse was partly caused by the rock’s landing place.
Story causation also doesn’t include opportunity costs much, unless they are intentional: I didn’t cause Africans to suffer horribly this year by buying movie tickets instead of paying to deworm them, and nor did all of the similarly neglectful story heroes ever. In an analogous system, oxygen reacting with hydrogen quite obviously causes less oxygen to remain to react later with anything else.
Probability
The main components of a story need only be plausible, they need not be likely. Story thought notices if the hero is happy when his girlfriend dies, but doesn’t mind much if he happens to find himself in a situation central to the future of his planet. System thought on the other hand is mostly uninterested in the extremes of possibility, and more concerned with normal behavior. Nobody cares much if it’s possible that your spending a dollar will somehow lead to the economy crashing.
This is probably to do with free will being a big part of stories. Things only need to be possible for someone with free will to do them. To ask why a character happens to be right and good when everyone else isn’t is a strange question to story thought. He’s good and right because he wants to be, and they all don’t want to be. Specific characters are to blame.
Time
In stories events tend to unfold in sequence, whereas they can occur in parallel in systems, or there might not be time.
Adeptness of our minds
Story thought is automatic, easy, compelling, and fun. System thought is harder and less compelling if it contradicts story thought. It can be fun, but often isn’t.
Robin has claimed a few times that law is humans’ best bet for protecting ourselves from super-intelligent robots. This seemed unlikely to me, and he didn’t offer much explanation. I figured laws would protect us while AI was about as intellectually weak as us, but not once it was far more powerful. I’ve changed my mind somewhat though, so let me explain.
When is it efficient to kill humans?
At first glance, it looks like creatures with the power to take humans’ property would do so if the value of the property minus the cost of stealing it was greater than the value of anything the human might produce with it. When AI is so cheap and efficient that the human will be replaced immediately, and the replacement will use resources enough better to make up for the costs of stealing and replacement, the human is better dead. This might be soon after humans are overtaken. However such reasoning is really imagining one powerful AI’s dealings with one person, then assuming that generalizes to many of each. Does it?
What does law do?
In a group of agents where none is more powerful than the rest combined, and there is no law, basically the strongest coalition of agents gets to do what they want, including stealing others’ property. There is an ongoing cost of conflict, so overall the group would do better if they could avoid this situation, but those with power at a given time benefit from stealing, so it goes on. Law basically lets everyone escape the dynamic of groups dominating one another (or some of it) by everyone in a very large group pre-committing to take the side of whoever is being dominated in smaller conflicts. Now wherever the strong try to dominate the weak, the super-strong stand ready to crush the strong.
This is Katja Grace’s blog. It is about the idiosyncratic class of things Katja considers to be on the frontier of important and interesting. Empirically, it tends to be about human behavior, social institutions and rules, anthropic reasoning, personal experimentation and improvement, philanthropy, and the prospect of machines becoming as interesting as humans. Katja is responsible for omissions as well as actions, and aspires to save the world at some point.
If you like this blog, but wish it was about mundane perspectives and ‘travel’ instead of matters of lasting importance, try Worldly Positions. If you want both of those things and more, with fewer errors from automatic crossposting, see world spirit sock puppet. (Everything here is now crossposted from there.)