Updated blog summary

I have updated the summary of every post on my blog, to include all the posts since last time, more than a year ago.

Epistemology of evilness

Most everyone seems to think that a big reason for bad things happening in the world is that some people are bad. Yet I almost never see advice for telling whether you yourself are a bad person, or for what to do about it if you seem to be one. If there are so many bad people, isn’t there a very real risk that you are one of them?

Perhaps the model is one where you automatically know whether you are good or bad, and simply choose which to be. So the only people who are bad are those who want to be bad, and know that they are bad. But then if there is this big population of bad people out there who want to be bad, why is so little of the media devoted to their interests? There’s plenty on how to do all the good things that a good person would want to do, such as voting for the benefit of society, looking after your children, buying gifts, expressing gratitude to friends, holding a respectable dinner, pleasing your partner. Yet so little on scamming the elderly, effectively shaking off useless relatives, lying credibly, making money from investments that others are too squeamish to take, hiding bodies. Are the profit-driven corporate media missing out on a huge opportunity?

If there aren’t a whole lot of knowingly bad people out there who want to be bad, and could use some information and encouragement, then either there aren’t bad people at all, or bad people don’t know that they are bad or don’t want to be bad. The former seems unlikely, by most meanings of ‘bad’. If the latter is true, why are people so blasé about the possibility that they themselves might be bad?

***

Prompted by the excellent book Harry Potter and the Methods of Rationality, in which there is much talk of avoiding becoming ‘dark’, in stark contrast to the world that I’m familiar with. If you enjoy talking about HPMOR, and live close to Pittsburgh, come to the next Pittsburgh Less Wrong Meetup.

Value realism

People have different ideas about how valuable things are. Before I was about fifteen, the meaning of this was ambiguous to me. I think I assumed that a tree, for instance, has some inherent value, and that when one person wants to cut it down and another wants to protect it, they both have messy estimates of what its true value is. At least one of them had to be wrong. This was understandable because value was vague or hard to get at or something.

In my year 11 Environmental Science class it finally clicked that there wasn’t anything more to value than those ‘estimates’.  That a tree has some value to an environmentalist, and a different value to a clearfelling proponent. That it doesn’t have a real objective value somewhere inside it. Not even a vague or hard to know value that is estimated by different people’s ‘opinions’. That there is just nothing there. That even if there is something there, there is no way for me to know about it, so the values I deal with every day can’t be that sort. Value had to be a function of things: the item being valued and the person doing the valuing.

I was somewhat embarrassed to have ever assumed otherwise, and didn’t really think about it again until recently, when it occurred to me that a long list of strange things I notice people believing can be explained by the assumption that they disagree with me on whether things have objective values. So I hypothesize that many people believe that value is inherent in a thing, and doesn’t intrinsically depend on the agent doing the valuing.

Here’s my list of strange things people seem to believe. For each I give two explanations: why it is false, and why it is true if you believe in objective values. Note that these are generally beliefs that cause substantial harm:

When two people trade, one of them is almost certainly losing

People don’t necessarily say this explicitly, but often seem to implicitly believe it.

Why it’s false: In most cases where two people are willing to trade, this is because the values they assign to the items in question are such that both will gain by having the other person’s item instead of their own.

Why it’s believed: There’s a total amount of value shared somehow between the people’s possessions. Changing the distribution is very likely to harm one party or the other. It follows that people who engage in trade are suspicious, since trades must be mostly characterized by one party exploiting or fooling another.
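The point about subjective valuations can be made concrete with a toy example (the goods and the dollar figures are invented purely for illustration):

```python
# Hypothetical subjective valuations: each person assigns their own
# value to each good, so value is a two-place function (good, valuer).
alice = {"apple": 1, "orange": 3}  # Alice's values for each good
bob = {"apple": 3, "orange": 1}    # Bob's values for the same goods

# Alice starts with the apple, Bob with the orange; they swap.
alice_gain = alice["orange"] - alice["apple"]  # value after minus before
bob_gain = bob["apple"] - bob["orange"]

# Both gains are positive: the trade is positive-sum, even though
# no new goods were created.
print(alice_gain, bob_gain)
```

On the realist model there is no way to fill in two different values for the same orange, which is exactly why the trade looks zero-sum from that perspective.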

Trade is often exploitative

Why it’s false: Assume exploiting someone implies making their life worse on net. Then in the cases where trade is exploitative, the exploited party will decline to participate, unless they don’t realize they are being exploited. Probably people sometimes don’t realize they are being exploited, but one is unlikely to persist in doing a job which makes one’s life actively worse for long without noticing. Free choice is a filter: it causes people who would benefit from an activity to do it while people who would not benefit do not.

Why it’s believed: If a person is desperate he might sell his labor, for instance, at a price below its true value. Since he is forced by circumstance to trade something more valuable for something less valuable, he is effectively robbed.

Prostitution etc. should be prevented, because most people wouldn’t want to do it freely, so it must be pushed on those who do it

Why it’s false: Again, free choice is a filter. The people who choose to do these things presumably find them better than their alternatives.

Why it’s believed: If most people wouldn’t be prostitutes, it follows that prostitution is probably quite bad. If a small number of people do want to be prostitutes, they are probably wrong. The alternative is that they are correct, and the rest of society is wrong. It is less likely that a small number of people are correct than a large number. Since these people are wrong, and their being wrong will harm them (most people would really hate to be prostitutes), it is good to prevent them from acting on their false value estimates.

If being forced to do X is dreadful, X shouldn’t be allowed

Why it’s false: Again, choice is a filter. For an arbitrary person, doing X might be terrible, but it is still often good for the people who want to do it. Plus, being forced to do a thing often decreases its value.

Why it’s believed: Very similar to the above. The value of X remains the same regardless of who is thinking about it, or whether they are forced to do it. That a person would choose to do a thing others are horrified to have pressed on them just indicates that the person is mentally dysfunctional in some way.

Being rich indicates that you are evil

Why it’s false: On a simple model, most trades benefit both parties, so being rich indicates that you have contributed to others receiving a large amount of value.

Why it’s believed: On a value realism model, in every trade someone wins and someone loses; anyone who has won at trading so many times is evidently an untrustworthy and manipulative character.

Poor countries are poor because rich countries are rich

Why it’s false: In some sense it’s true: the rich countries don’t altruistically send a lot of aid to the poor countries. Beyond that there’s no obvious connection.

Why it’s believed: There’s a total amount of value to be had in the world. The poor can’t become richer without the rich giving up some value.

The primary result of promotion of products is that people buy things they don’t really want

Why it’s not obviously true: The value of products depends on how people feel about them, so it is possible to create value by changing how people feel about products.

Why it’s believed: Products have a fixed value. Changing your perception of this in the direction of you buying more of them is deceitful sophistry.

***

Questions:

Is my hypothesis right? Do you think of value as a one-place or two-place function? (Or more?) Which of the above beliefs do you hold? Are there legitimate or respectable cases for value realism out there? (Moral realism is arguably a subset.)

Too obvious to say

I’m in favor of living for an indefinitely long time. Pointing this out seems similar to pointing out that I’m in favor of not putting my hands in blenders while they are running. Same goes for ‘there probably isn’t a God’, ‘freezing one’s head is a good idea (under certain circumstances)’, and a lot of the other apparently controversial topics. I rarely state these opinions unless asked, because it’s embarrassing to point out obvious things. If there seemed to be a sincere discussion of whether forty-nine is the square of seven, I’d be embarrassed to join it, despite my strong views on the topic.

From the perspective of someone who’s not sure whether life extension is a good idea, I look like I don’t have a strong opinion. They see a small number of people who visibly like it, and a small number who visibly don’t. Yet if most people behave like me in the above respect, almost everyone they don’t hear from could be on one side or the other, and it would look the same.

Do many people act similarly to me in this regard? I’m not sure. Why would saying obvious things be embarrassing? It suggests that you don’t think they are obvious. So if you belong to a social group where it is embarrassing to believe X, all things equal I’d expect it to be embarrassing to point out ‘not X’. But some social groups are defined by debating issues that they claim to be very confident about one way or the other. So something else is going on too. For instance, members of a pro-life group don’t seem to signal any uncertainty about the issue to other members by engaging with pro-choice people.

This could be a matter of how the other side is behaving. If I went out and found the people arguing about 49 and joined in, that would look worse than pitching in if I were just sitting at home and my housemates got into an argument about it. In the first case it would be embarrassing in front of my current friends, but if I got so involved as to make new friends on the pro side of the 49 debate, I guess it would be less embarrassing in front of them. So maybe people who have strong views, but are around people with other views, still find it ok to say the others are wrong, while those who only spend time with like-minded folks are more likely to feel silly claiming that the other side is wrong. Notice that claiming the other side is wrong is different from assuming the other side is wrong and mocking them for it. Everyone can do that. If this model is right, and people mostly spend time with people who are near them on the spectrum of various opinions, we would still get an effect like the one illustrated above. I don’t know if this is true. What do you think?

Explanations of mathematical explanation

I recently read Mathematical Explanation [gated], by Mark Steiner (1978). My summary follows, and my commentary follows that. I am aware that others have written things since 1978 on this topic, but I don’t have time to read them right now.

***

We seem to think there is a distinction between explaining a mathematical fact and merely demonstrating it to be the case. We have proofs that do both things, and perhaps a sliding scale of explanatoriness between them. One big question then is what makes a proof actually explain the thing it proves? Or at least what makes it seem that way to us?

One suggestion has been the level of generality or abstractness. Perhaps if we show a particular fact follows from some much bigger theory, the fact feels more explained. But then consider this fact:

1+2+3+…+n = n(n+1)/2

There is an inductive proof of this:

S(1) = 1(1+1)/2 = 1

S(n+1) = S(n) + (n + 1) = n(n+1)/2 + 2(n+1)/2 = (n + 1)(n+2)/2

This is not taken to be very explanatory. Whereas this is:

● O O O O
● ● O O O
● ● ● O O
● ● ● ● O

[the black circles make a triangle of 1+2+3+4. Any such triangle can be paired with another identical triangle to make a rectangle of area n x (n+1). So the triangle is half of n(n+1).]

The latter is, if anything, less general, yet it seems a much better explanation (I remember learning it this way as a preteen, in a book about fun math magic). There are other examples.
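Neither proof is in dispute here, only how explanatory each feels; still, a few lines of Python can verify the formula itself for a range of cases:

```python
# Check that 1 + 2 + ... + n equals n(n+1)/2 for the first hundred n.
# This is a verification, not an explanation, which is rather the point:
# a brute-force check demonstrates the fact without explaining it at all.
for n in range(1, 101):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula holds for n = 1..100")
```

A check like this sits even lower on the explanatoriness scale than the inductive proof: it gives no hint of why the pattern holds, which the triangle picture does.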

This case and others suggest that being able to visualize a proof is key to its seeming to be an explanation. Steiner discards this immediately as too subjective, and claims there are also counterexamples.

He also quickly dismisses a third hypothesis that others have put forward: that a proof is explanatory if it could have been used to discover the fact, rather than just to verify it. His counterexample is Euler’s identity, which I shan’t go into here. I take it this hypothesis isn’t very plausible anyway, since often we discover a fact first and then hope to explain it better.

Steiner offers his own theory: that a proof is explanatory if it makes use of a ‘characterizing property’ of an entity that is mentioned in the theorem. ‘Characterizing properties’ characterize an entity relative to other entities in some similar family. For instance, 18 might be characterized as 2*3*3, since other numbers don’t have that property. 18 might also be characterized as being one more than 17, or in a huge number of other ways.

If I understand, the idea is that if we are clear on how a result depends on a particular characterizing property, we will feel that the result has been explained. If we don’t see how something unique about the entities in question ‘caused’ the outcome, the outcome seems arbitrary. He explains further that this means we can see that if we change the properties of the entity, perhaps swapping out 18 for 20, we would get a different result.

Steiner shows that the many proofs he has presented, which we have considered explanatory, do in fact depend on characterizing properties, and thus considers his theory well supported.

Perhaps I misunderstand this notion of ‘characterizing properties’. It seems to me that of course all proofs depend on properties specific to the entities they are about (relative to whatever entities the proof is not about). So to distinguish the explanatory proofs, Steiner needs a narrower notion of a characterizing property. For instance, a property that is particularly saliently related to the entity in question. Or he needs to claim that explanatoriness requires the observer to actually notice or understand the connection between the explanatory property and the outcome. In which case the explanatoriness of a proof would be a function of the observer’s psychology as well as the proof. Any proof would be perfectly explanatory if the reader followed it carefully enough.

At any rate, he doesn’t seem to be thinking of either of those things (though again I may be misunderstanding just what he is claiming at the end here). He rather claims that the various proofs he examines do in fact rely on properties that characterize the entities involved. The class seemed to agree with me here.

My tentative theory of when we feel something has been explained, which goes for scientific explanations as well as mathematical ones, is as follows. We feel like we understand a bunch of things that we are very familiar with: chunks of matter moving through space and knocking into each other, liquids, shapes, basic agenthood, that sort of thing.

Anything that happens that only involves these things acting in their usual ways doesn’t feel like it needs any extra explanation. It is obvious. To ‘explain’ less familiar things, we can do one of two things. We can frame them in terms of something we already intuitively grasp in the above way. This is what is usually called an explanation. For instance we can think of electricity as being like water, or of the first n integers as being like bits of a triangle. Or of the mysterious murder being like a waitress putting poison in the soup. Alternatively we can just keep interacting with the entity in question until we become familiar with its properties, and then we think them obvious and not requiring explanation. For instance I no longer feel like I need an explanation for x^2 making a parabola shape, because I’m so familiar with it.

This arguably fits with many of the characteristics we have noted are associated with explanatoriness. Instances of generalizations that we understand feel explanatory. Pictures tend to be explanatory, especially diagrams with simple shapes. We feel like we could have discovered a thing ourselves if it follows from behavior of entities we can manipulate intuitively.

While this seems to me a decent characterization of what feels explanatory, I can’t see that it is a particularly useful category outside of psychology, for instance for use in saying what it is that science is meant to be doing. Something like unification seems more apt there, but that’s a topic for another time.