Softer, easier, less technical subjects: the ones about algorithmically sophisticated self-replicating nano-machinery-based robots with human-level intelligence that were constructed using selection effects, and their elaborate game theoretic interactions. e.g. sociology, economics, psychology, biology.
Harder, more difficult, more technical subjects: the ones about numbers, shapes, simple substances, rocks, making and moving macro-objects, algorithms. e.g. math, physics, chemistry, geology, engineering, computer science.
Why are the easy subjects about super-complicated, hard to understand things and the hard subjects about relatively simple things?
The first theory that comes to mind (perhaps because I’ve heard it before) is that the ‘easy’ subjects are just too hard. Nobody can get anywhere in them, which does two things. It means those subjects don’t accrue any hard-to-learn infrastructure of concepts and theories. And it completely undermines their use as a costly signal of ability to get somewhere in a subject. This leaves these subjects disproportionately popular among people who wouldn’t have been able to send that particular signal in any case, and empty of difficult concepts and theories. Worse, once the capable people leave, the body of useful science grows even more slowly and interest in the subject becomes a worse signal of competence.
Or less cynically, the capable people reasonably go to subjects that are feasible to make progress on, where they can contribute social value.
At any rate, the hard subjects are seen as hard because they have more sophisticated science, and are full of impressive people. They are hard to play at a socially acceptable level, because the frontier is more sophisticated and the competition is stiff.
On this theory, in ancient times rocket science was probably left up to the least capable members of the tribe, while pointy stick science was the place for impressive technical expertise. Which sounds pretty plausible to me.
I’m not sure if this theory really makes sense of the evidence. The kinds of subjects that are too hopeless for a capable person to perceptibly outperform a fool in are the ones like ‘detailed turbulence prediction’. People do actually make progress in soft sciences, and it would be surprising to me if those people were not disproportionately capable. It might be that the characteristic scale of progress is smaller relative to the characteristic scale of noise, so a capable person can less surely show their virtue. But it is less clear that that generally aligns with subjects being harder. For instance, if you need a certain level of (skill + luck) to find breakthroughs, and breakthroughs become harder to find, then more skilled people would at least sometimes be at an advantage.
Another explanation is that everyone feels like they understand subjects relating to humans much more than they feel like they understand physics, because (as humans) human-related things come up a lot for them, so they have relevant intuitions and concepts. These intuitions may or may not constitute high quality theories, and these concepts may or may not be the most useful. However they do make soft subjects look simple and feel understandable.
I have heard this theory before, but I think mostly as an explanation by social scientists for why people are annoying. If it also explains why the hard sciences are easy, that would nicely simplify things.
Are there other good theories?
Introspection. We know how we are from the inside, and we extrapolate to everyone else. Though rough and problematic if done without accounting for the differences between oneself and the subject of analysis, it serves as a very good estimate.
For the other subjects, it's easier to understand the ones for which we can create analogies we feel comfortable playing with. The harder sciences tend to be more fundamental, and by their nature more abstract.
The Krebs cycle in biology is really complicated, but we can imagine it as a chain of 'things' that feed into other things, with some components added and removed at certain points, and energy as the output.
Jeffrey Conditionalisation in Bayesian Epistemology can be easily written down, and one can see why one would use it, but it's hard to import our metaphor for probability (a region of a larger area) into it, so we can't understand it easily. Even Bayes' Theorem itself is hard to grasp with the ground metaphor for probability.
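(For concreteness, here is the standard statement of the rule the comment refers to, not taken from the comment itself: if experience shifts one's credences over a partition $\{E_i\}$ to new values $P'(E_i)$, Jeffrey Conditionalisation sets the new credence in any proposition $A$ to

```latex
P'(A) = \sum_i P(A \mid E_i)\, P'(E_i)
```

Ordinary conditionalisation is the special case where some $P'(E_k) = 1$, which recovers Bayes' Theorem, $P(H \mid E) = P(E \mid H)\,P(H)/P(E)$.)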
Of course, different people have different capabilities for building intuitions and concepts on top of our starting toolbox. Mathematicians are probably able to ‘see’ things in a way I can’t without proper training, and the same for me as an engineer.
“If it also explains why the hard sciences are easy, that would nicely simplify things.”
For us they may be, but not for most people (there's a correlation between the hardness of the subject and the IQ of those who study it). Unless you meant the hard sciences are (about) easy (things), in which case my explanation above fits.
If you and I run a foot race next to each other down a straight track, starting at the same time at one end, it is easy to see who is better. But if we each run in different directions over complex terrain, it is harder to compare us. So we often prefer simple arenas for showing off our relative ability. Physics is such a simple arena, while social science is more like the complex hills where we each run in different directions.
The more immediately useful the subject, and the more non-specialists care, the more incentive there is to optimize your message for the non-specialists rather than for the specialists. This incentive leads to much less progress. Other than individual ego gratification there is little motive to lie about physics results, while there is enormous motive to lie in medicine, economics, public policy, etc.
Are we holding ourselves to the same standard of competence in the “soft” and “hard” sciences?
Maybe the “soft” sciences seem “easier” precisely because they’re actually harder: the ceiling of what’s humanly possible is lower. Maybe I’m “just as good” at intuitively predicting whether a bridge will support a heavy truck as I am at predicting how an audience will react to a speech. But I *think* I’m worse at the truck problem because I know that it’s humanly possible to be so much better than I am. But the speech problem, on the other hand, is so hard that I’m already approximately as good as anyone could be.
But I think that Artir’s point is the most important one. I can “predict” how people will react to a speech by just imagining hearing it myself. In some sense, I’m not predicting at all. I just run the experiment and say, “that’s what happens when you do that.” It would be as if I had near-copies of the truck and the bridge in my head and I could just run the truck over the bridge and see whether it held up.
(Now that I think of it, this is straight out of a Calvin & Hobbes comic.)
The “simple” subjects give you more opportunity to screen off confounding variables, which lets you perform definitive tests. This means the theories in the “simple” fields are much more exposed to falsification by the world. The squishier sciences allow no hope of screening off confounding variables, which leaves many more opportunities for two things: 1. false positive measurements caused not by a real effect, but by a failure to control for confounders – there are so many! – and 2. holding on to an empirically falsified theory because you ascribe the source of the falsification to badly controlled-for confounders instead of a real effect.
All this means that in the complex sciences, there is much more room for the massaging and interpretation of data, so winning a scientific debate has relatively more to do with your political salesmanship and less with signals from reality. It’s one reason why the complex sciences have revolutions at their core, and even fundamental things are hotly debated. And it’s not just human stuff. Climate science, because it suffers from the same excess of variables, also has these problems when it tries to defend even broad causal claims.
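(The confounding story in this comment is easy to see in a toy simulation. The sketch below is my own illustration, not from the comment: a hidden variable Z drives both X and Y, with no direct effect of X on Y, so the naive X–Y correlation looks like a real effect but vanishes once Z is held roughly fixed.)

```python
# Toy sketch of a spurious correlation: Z causes both X and Y,
# yet X and Y look correlated until we "control" for Z.
import random

random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # X is caused by Z, not by Y
y = [zi + random.gauss(0, 1) for zi in z]  # Y is caused by Z, not by X

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Naive correlation looks like a real X -> Y effect.
print(round(corr(x, y), 2))   # ≈ 0.5

# Crudely "control" for Z by restricting to a narrow slice of Z values.
idx = [i for i in range(n) if abs(z[i]) < 0.1]
xs = [x[i] for i in idx]
ys = [y[i] for i in idx]
print(round(corr(xs, ys), 2))  # ≈ 0.0, the apparent effect disappears
```

In a physics experiment one can actually fix Z; in the squishier sciences one is usually stuck with the first number.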
I think the answer is more historical and less philosophical. This perception is mostly only true within the European tradition, and then only in the past few millennia. It’s perpetuated today by sexism, though it didn’t appear to start out quite so solidly that way. (Native American cultures, Indian culture, etc. appear not to have made this distinction. Unsurprisingly, their medical understanding seems to have led ours by the same time frame quoted above – e.g. regarding medicines, understanding of disease, etc.)