Robin wonders (in conversation) why apparently fairly abstract topics don’t get more attention, given the general trend he notices toward more abstract things being higher status. In particular, many topics we and our friends are interested in seem fairly abstract, and yet we feel like they are neglected: the questions of effective altruism, futurism in the general style of FHI, the rationality and practical philosophy of LessWrong, and the fundamental patterns of human behavior which interest Robin. These are not as abstract as mathematics, but they are quite abstract for analyses of the topics they discuss. Robin wants to know why they aren’t thus more popular.
I’m not convinced that more abstract things are more statusful in general, or that it would be surprising for such a trend to be fairly imprecise. However, supposing they are, and that it would be, here is an explanation for why some especially abstract things seem silly. It might be interesting anyway.
Lemma 1: Rethinking common concepts and being more abstract tend to go together. For instance, if you want to question the concept ‘cheesecake’, you will tend to do this by developing some more formal analysis of cake characteristics, and showing that ‘cheesecake’ doesn’t line up with the more cutting-nature-at-the-joints distinctions. Then you will introduce another concept which is close to cheesecake, but more useful. This will be one of the more abstract analyses of cheesecakes that have occurred.
Lemma 2: Rethinking common concepts and questioning basic assumptions look pretty similar. If you say ‘I don’t think cheesecake is a useful concept – but this is a prime example of a squishcake’, it sounds a lot like ‘I don’t believe that cheesecakes exist, and I insist on believing in some kind of imaginary squishcake’.
Lemma 3: Questioning basic assumptions is also often done fairly abstractly. This is probably because the more conceptual machinery you use, the more arguments you can make. e.g. many arguments you can make against the repugnant conclusion’s repugnance work better once you have established that aversion to such a scenario is one of a small number of mutually contradictory claims, and have some theory of moral intuitions as evidence. There are a few that just involve pointing out that the people are happy and so on, but where there are a lot of easy non-technical arguments to make against a thing, it’s not generally a basic assumption.
Explanation: Abstract rethinking of common concepts is easily mistaken for questioning basic assumptions. Abstract questioning of basic assumptions really is questioning basic assumptions. And questioning basic assumptions has a strong surface resemblance to not knowing about basic truths, or at least not having a strong gut feeling that they are true.
Not knowing about basic truths is not only a defining characteristic of silly people, but also one of the more hilarious of their many hilarious characteristics. Thus I suspect that when you say ‘I have been thinking about whether we should use three truth values: true, false, and both true and false’, it sounds a lot like ‘My research investigates whether false things are true’, which sounds like ‘I’m yet to discover that truth and falsity are mutually exclusive opposites’, which sounds a bit like ‘I’m just going to go online and check whether China is a real place’.
Some evidence to support this: when we discussed paraconsistent logic at school, it was pretty funny. If I recall, most of the humor took the form ‘Priest argues that bla bla bla is true of his system’ … ‘Yeah, but he doesn’t say whether it’s false, so I’m not sure if we should rely on it’. I feel like the premise was that Priest had some absurdly destructive misunderstanding of concepts, such that none of his statements could be trusted.
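For readers unfamiliar with the system being joked about: Priest’s LP (the ‘Logic of Paradox’) is one standard three-valued logic of this kind, with values true, false, and both. A minimal sketch, assuming the usual textbook presentation (the function names here are mine, not Priest’s notation):

```python
# Truth tables for Priest's "Logic of Paradox" (LP) -- a sketch, assuming
# the standard presentation: three values T (true), F (false), and
# B (both true and false), ordered F < B < T.
# Conjunction takes the minimum of the two values, disjunction the
# maximum, and negation swaps T and F while leaving B fixed.

ORDER = {'F': 0, 'B': 1, 'T': 2}

def neg(a):
    return {'T': 'F', 'B': 'B', 'F': 'T'}[a]

def conj(a, b):
    return a if ORDER[a] <= ORDER[b] else b  # minimum of the two values

def disj(a, b):
    return a if ORDER[a] >= ORDER[b] else b  # maximum of the two values

# A sentence is assertible in LP if its value is "designated" (T or B),
# so a sentence can be both true and false and still be asserted.
def designated(a):
    return a in ('T', 'B')

# The liar-like value B: the negation of a B sentence is still B, so
# P-and-not-P comes out designated -- without everything following from it.
assert neg('B') == 'B'
assert designated(conj('B', neg('B')))  # a B-valued contradiction is assertible
assert not designated(conj('T', 'F'))   # an ordinary contradiction is not
```

The joke in the seminar trades on exactly the `designated` clause: in LP, showing a claim is true does not rule out its also being false.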
Further evidence: I feel like some part of my brain interprets ‘my research focuses on determining whether probability theory is a good normative account of rational belief’ as something like ‘I’m unsure about the answers to questions like ‘what is 50%/(50% + 25%)?”. And that part of my brain is quick to jump in and point out that this is a stupid thing to wonder about, and it totally knows the answers to questions like that.
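For what it’s worth, the arithmetic that part of my brain is so confident about is just renormalization, the operation behind conditionalizing a probability distribution. A toy sketch (the three hypotheses and their numbers are invented for illustration):

```python
# Renormalizing probabilities after ruling out an outcome -- the
# "50% / (50% + 25%)" calculation from the text, written out.
# Suppose three mutually exclusive hypotheses start at these probabilities:
prior = {'A': 0.50, 'B': 0.25, 'C': 0.25}

# On learning that C is false, keep the surviving probabilities in
# proportion and rescale them to sum to 1 (conditionalization):
survivors = {h: p for h, p in prior.items() if h != 'C'}
total = sum(survivors.values())
posterior = {h: p / total for h, p in survivors.items()}

print(posterior['A'])  # 0.5 / (0.5 + 0.25) = 2/3
```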
Other things that I think may sound similar:
- ‘my research focuses on whether not being born is as bad as dying’ <—> ‘I’m some kind of socially isolated sociopath, and don’t realize that death is really bad’
- ‘We are trying to develop a model of rational behavior that accounts for the Allais paradox’ <—> ‘we can’t calculate expected utility’
- ‘Probability and value are not useful concepts, and we should talk about decisions only’ <—> ‘My alien experience of the world does not prominently feature probabilities and values’
- ‘I am concerned about akrasia’ <—> ‘I’m unaware that agents are supposed to do stuff they want to do’
- ‘I think the human mind might be made of something like sub-agents’ <—> ‘I’m not familiar with the usual distinction of people from one another’.
- ‘I think we should give to the most cost-effective charities instead of the ones we feel most strongly for’ <—> ‘Feelings…what are they?’
I’m not especially confident in this. It just seems a bit interesting.
I think you’re on target. Those who do talk about abstractions are countersignalling that they have sufficient intellectual status not to have to worry about being taken for someone stupid and confused.
There’s another part to the explanation. Where being popular increasingly means being part of a conversation (that is, social media, etc., which try to mimic oral exchanges), the topics will tend to be more near-mode because personal conversation is. Near-mode isn’t compatible with the detachment and distance required for discussing abstract matters. (For discussion of this, but with regard to style rather than content, see “Clear and Simple as the Truth” reinterpreted through construal-level theory — http://tinyurl.com/cdzotb4 )
Most people have an “if it ain’t broke, don’t fix it” attitude towards these assumptions. They don’t see anything problematic with such assumptions, so they avoid discussions questioning them. You look bad for engaging in such discussions because these same people don’t understand what you have to gain from them either. They may think you’re a smart contrarian who questions basic assumptions for useful reasons that they don’t (yet) understand themselves. But that normally happens when you’re already higher status than they are. Usually they’ll conclude you’re making a try-hard attempt at drawing attention to yourself or signaling intelligence. Or they’ll just categorize you as “weird” and not analyze your behavior further.
One explanation for the apparent paradox at the beginning is that Effective Altruism and futurism and stuff just aren’t particularly abstract questions, they’re really down to earth and practical and so boring.
And/or they rely on practical (non-basic) assumptions that people just assume are untrue (e.g. charity doesn’t really work anyway; working for a bank would do more harm than donating money could) and so they’re especially boring, in a boring way.
A particularly appealing candidate for the second kind of explanation (abstractness per se isn’t high status), is that only certain kinds of profound sounding abstraction (obscurity) are high status. e.g. “Analytic philosophy distinguishes between obscurity and technicality. …This enrages some of its enemies. Wanting philosophy to be at once profound and accessible, they resent technicality but are comforted by obscurity.” (Bernard Williams)
As to your hypothesis, I’m not sure about the assumptions about the link between abstraction and rethinking common concepts. I see no reason to think the connection is anything other than very weak: maybe rethinking common concepts requires a degree of abstraction, but many abstract discussions don’t seem to require anything much like questioning basic truths (especially not in a way that looks like simply lacking knowledge of basic truths).
There may well be something to the idea that asking questions of a certain kind about human behaviour looks unappealing, but there seem many readily available explanations for this aside from the ‘abstraction looks like ignorance of basic knowledge’ hypothesis: e.g. analysing things in terms of self-interest is too “cynical”; positing heuristics or calculations underlying decision-making looks too much like you are suggesting people consciously reason through those considerations; giving any kind of simplifying explanation looks reductive; talking about special cases looks like wild over-generalisation; etc.
In my experience, it seems that most successful people outside of academia implicitly disbelieve in probability and value, and only take talk of decisions (actionable options) as normative. I think that probability theory is a good normative account of rational belief, but not a good descriptive account of human confidence, which seems, IMHO, to break things down not between objective views of the world, but rather between treating the world as the object and itself as subject, and treating the world as subject and itself as object.