This argument seems common to many debates:
‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.
This could make sense if X weren’t especially integral to the goal. For instance, if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But the argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.
Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell whether you are increasing net utility, or by how much. The critic concludes that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly does not warrant abandoning the goal. One should always be better off putting whatever effort one is willing to contribute into the utilitarian accuracy it buys, rather than throwing it away on a strategy that is more random with regard to the goal.
A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being ridiculous. It’s just not possible to know which actions will increase the value of the company exactly how much. Why don’t we try to make sure that all of our meetings end on time instead?’
In general, when optimizing X somehow is integral to the goal, the argument must fail. If the point is to make X as close to three as possible, for instance, then no matter how bad your best estimate is of what X will be under different conditions, you can’t do better by ignoring X altogether. If you have a non-X-estimating strategy which you anticipate will do better than your best estimate at getting a good value of X, then you in fact believe yourself to have a better X-estimating strategy.
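The claim above can be checked directly by simulation. The sketch below uses entirely hypothetical numbers (the spread of X, the menu of ten actions, the noise levels are all my assumptions, not anything from the post): it compares choosing the action whose noisy estimate of X is closest to three against choosing an action while ignoring X. Even when the estimates are mostly noise, acting on them does better on average than the X-blind strategy.

```python
import random

random.seed(0)

def trial(noise_sd, n_actions=10):
    # Each available action leads to some true value of X, unknown to the decider.
    true_x = [random.uniform(0, 6) for _ in range(n_actions)]
    # The decider only sees a noisy estimate of X for each action.
    est_x = [x + random.gauss(0, noise_sd) for x in true_x]
    # Strategy 1: pick the action whose *estimate* is closest to three.
    best = min(range(n_actions), key=lambda a: abs(est_x[a] - 3))
    # Strategy 2: ignore X and pick an action at random.
    rand = random.randrange(n_actions)
    # Return each strategy's distance from the goal (X = 3).
    return abs(true_x[best] - 3), abs(true_x[rand] - 3)

def average_errors(noise_sd, trials=20000):
    results = [trial(noise_sd) for _ in range(trials)]
    est_err = sum(r[0] for r in results) / trials
    rand_err = sum(r[1] for r in results) / trials
    return est_err, rand_err

# Even a very noisy estimator beats ignoring X entirely.
for sd in (0.5, 2.0, 5.0):
    est_err, rand_err = average_errors(sd)
    print(f"noise sd {sd}: estimator error {est_err:.2f}, random error {rand_err:.2f}")
```

As the noise grows the estimator's advantage shrinks toward zero, but it never becomes worse than ignoring X, which is exactly the point: a bad estimate degrades gracefully, while discarding the estimate throws away whatever signal it carried.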
I have criticized this kind of argument before in the specific realm of valuing human life, but it seems to apply more widely. Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. This is arguably similar to some lines of ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.
Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war, for instance, we should do something qualitative with scenarios and the like. This could be a defensible position. Perhaps intuitions can better implicitly assess probabilities via some other activity than by explicitly thinking about them. However, I have not heard this claim accompanied by any such motivating evidence. Also, if this were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources, rather than disregarding quantitative assessments altogether.
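The aggregation suggested at the end of the paragraph can be sketched concretely. Everything below is an illustrative assumption rather than anything from the post: the example numbers, the translation of a qualitative judgment into a probability, and the pooling rule (weighted averaging in log-odds space, one standard choice among several).

```python
import math

def to_log_odds(p):
    return math.log(p / (1 - p))

def from_log_odds(lo):
    return 1 / (1 + math.exp(-lo))

def pool(probabilities, weights=None):
    """Combine several probability estimates of the same event by
    weighted averaging in log-odds space (one common pooling rule)."""
    if weights is None:
        weights = [1.0] * len(probabilities)
    total = sum(weights)
    lo = sum(w * to_log_odds(p)
             for p, w in zip(probabilities, weights)) / total
    return from_log_odds(lo)

# A qualitative scenario exercise, rendered (roughly) as a probability,
# pooled with two explicit quantitative estimates of the same risk.
qualitative = 0.05            # "serious but unlikely", as a number
quantitative = [0.02, 0.08]   # e.g. two independent models
print(pool([qualitative] + quantitative))
```

The pooled answer always lands between the most extreme inputs, so the qualitative exercise shifts the quantitative estimates rather than replacing them, which is the point of aggregating instead of disregarding.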
Futarchy often prompts similar complaints that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, some representation of what people want has to get into whatever system of government is used, if the result is not to be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly, in an unknown but messy fashion, while they do other things, is probably not more accurate than asking people what they want. It seems however that people think of the former as a successful way around the measurement problem, not as a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?
The big problem of futarchy isn’t the difficulty of estimation; it’s that, once decisions of real consequence are based on prediction markets, those markets can be expected to stop predicting and start selling policies to the highest bidder.
But yes, you’re right about the ridiculous frequency of that type of argument. Whenever I hear it, I pretty much tune the person out and don’t trust another word that person says, ever.
[lots of stuff]
Many (not all) non-utilitarians would argue that their moral strategies are less random with regard to the goal than even our best effort at utilitarian accuracy, and that this is the whole point of non-utilitarian strategies. So the real criticism in this case is:
Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. However, Proposal Q achieves the goal of P, but uses the more practical measure Y, which is a strong correlate of X. Therefore we shouldn’t do P, and should do Q.
Not necessarily. If your goal is to maximise X, a strategy which blindly maximises X might easily be better than one which involves estimating it badly then acting on that estimate.
Ideas we find useful: “Information is expensive.” “Whatever can be measured is usually not important.” We have been studying the visual system, and were interested to hear that our logarithmic sense of the world, and the maths that spring from it, may be due to our visual system’s need to stereoscopically see objects in dense forest, at night, like bugs, to eat; thus depth perception and perspective. The world is only sort of logarithmic. Basically, we propose that any complex/interesting/valuable experience is far outside the ability of any brain to describe accurately, let alone understand, model, or affect, even with the help of a gazillion computers; trying would probably just make it worse.
I think the argument does have some place when X will remain impractical to measure even after proposal P has been carried out. The objection at this point becomes that the hypothesis ‘P is effective’ is nonfalsifiable, in a practical or perhaps even epistemological sense.
Small point regarding utilitarianism: if the goal is maximizing utility, and the cost of measuring utility in principle leaves utility lower than applying moral principle X would, then the argument does in fact apply.
But that’s only in the case that the original argument is modified thus:
>A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being ridiculous. It’s just not possible to know which actions will increase the value of the company exactly how much. Why don’t we try to make sure that all of our meetings end on time instead?’
Now, that’s a strawman https://en.wikipedia.org/wiki/Straw_man