I like Scott’s post on what LessWrong has learned in its lifetime. In general I approve of looking back at your past misunderstandings and errors, and trying to figure out what you did wrong. This is often very hard, because it’s hard to remember or imagine what absurd thoughts (or absences of thought) could have produced your past misunderstandings. I think this is especially because nonsensical confusions and oversights tend to be less well-formed, and thus less organizable or memorable than e.g. coherent statements are.
In the spirit of understanding past errors, here is a list of errors which I think spring from a common meta-error. Some are mentioned in Scott’s post, some were mine, some are others’ (especially, I think, people who are a combination of smart and naive), and a few are hypothetical:
- Because I believe agent-like behavior is obviously better than randomish reactions, I assume I am an agent (debunked!).
- Because I think it is good to be sad about the third world, and not good to be sad about not having enough vitamin B, I assume the former is why I am sad.
- Because I explicitly feel that racism is bad, I am presumably not racist.
- Because my mind contains a line of reasoning that suggests I should not update much against my own capabilities because I am female, presumably I do no such thing.
- Because I have formulated this argument that it is optimal for me to think about X-risks, I assume I am motivated to do so (also debunked on LW).
- Because I follow and endorse arguments against moral realism, and infer that on reflection I prefer to be a consequentialist, I assume I don’t have any strong moral feelings about incest.
- Because I have received sufficient evidence that I should believe Y, I presumably believe Y now.
- I don’t believe Y, and the only reason I endorse to not believe things is that you haven’t got enough evidence for them, therefore I must not have enough evidence to believe Y.
- Because I don’t understand the social role of Christmas, I presume I don’t enjoy it (note that this is a terrible failing of the outside view: none of those people merrily opening their presents understands the social role either).
- Because I don’t endorse the social role of drinking, I assume I don’t enjoy it.
- Because signaling sounds bad to me, I assume I don’t do it, or at least not as much as others.
- Because I know the cost of standing up is small (it must be, it’s so brief and painless!), this cannot be a substantial obstacle to going for a run (debunked!).
- I know good motives are better than bad motives, so presumably I’m motivated by good motives (unlike the bad people, who are presumably confused about whether good things are the things you should choose).
- I have determined that polyamory is a good idea and babies are a bad idea, therefore I don’t expect to feel jealousy, or any inclination to procreate, in my relationships.
In general, I think the meta-error is failing to distinguish between endorsing a mental characteristic and having that characteristic. Not erroneously believing that the two are closely related, but actually just failing to notice that there are two things, which might not be the same.
It seems harder to make the same kind of errors with non-mental characteristics. Somehow it’s more obvious to people that saying you shouldn’t smoke is not the same as not smoking.
With mental characteristics, however, you don’t know much at all about how your brain works, and it’s not obvious exactly what your beliefs and feelings are. And your brain does produce explicit endorsements, so perhaps it is easy to identify those with the mental characteristics that the endorsements are closely related to. Note that explicitly recognizing this meta-error is different from having it integrated into your understanding.
I like this post and so I am ))
We will have a short conference on X-risks in SF on 14 June, if you are interested.
I like this post a lot, but I wish you had called the error “I should, therefore I do” instead of “I like, therefore I am”, the latter of which makes no sense.
If you had, you would have realized that it’s a special case of the even more general error of conflating the way things should be with the way things are.
The strongly recognized error of “belief in belief” springs to mind. Not so strongly recognized in practice, it would appear.
“Because I follow and endorse arguments against moral realism, and infer that on reflection I prefer to be a consequentialist, I assume I don’t have any strong moral feelings about incest.”
This is interesting. I’m in favour of incest remaining illegal – it’s a nice bright line for various things I like – and I experience an instinctive “squick” aversion to it in my *own* case, of course, on those rare occasions I trigger it.
But I’ve never felt a strong moral feeling about incest. At all. So I’m really interested by reports of this. Could you tell us more about how you realized-slash-noticed this happening?