As I understand them, the social rules for interacting with people you disagree with are like this:
- You should argue with people who are a bit wrong
- You should refuse to argue with people who are very wrong, because it makes them seem more plausibly right to onlookers
I think this has some downsides.
Suppose there is some incredibly terrible view, V. It is not an obscure view: suppose it is one of those things that most people believed two hundred years ago, but that is now considered completely unacceptable.
New humans are born and grow up. They are never acquainted with any good arguments for rejecting V, because nobody ever explains in public why it is wrong. They just say that it is unacceptable, and you would have to be a complete loser who is also the Devil to not see that.
Since it took the whole of humanity thousands of years to reject V, even if these new humans are especially smart and moral, they probably do not each have the resources to personally out-reason the whole of civilization for thousands of years. So some of them reject V anyway, because they do whatever society around them says is good person behavior. But some of the ones who rely more on their own assessment of arguments do not.
This is bad, not just because it leads to an unnecessarily high rate of people believing V, but because the very people who usually help get us out of believing stupid things – the ones who think about issues, and interrogate the arguments, instead of adopting whatever views they are handed – are being deprived of the evidence that would let them believe even the good things we already know.
In short: we don’t want to give the new generation the best sincere arguments against V, because that would be admitting that a reasonable person might believe V. Which seems to get in the way of the claim that V is very, very bad. Which is not only a true claim, but an important thing to claim, because it discourages people from believing V.
But we actually know that a reasonable person might believe V, if they don’t have access to society’s best collective thoughts on it. Because we have a whole history of this happening almost all of the time. On the upside, this does not actually mean that V isn’t very, very bad. Just that your standard non-terrible humans can believe very, very bad things sometimes, as we have seen.
So this all sounds kind of like the error where you refuse to go to the gym because it would mean admitting that you are not already incredibly ripped.
But what is the alternative? Even if losing popular understanding of the reasons for rejecting V is a downside, doesn’t it avoid the worse fate of making V acceptable by engaging people who believe it?
Well, note that the social rules were kind of self-fulfilling. If the norm is that you only argue with people who are a bit wrong, then indeed if you argue with a very wrong person, people will infer that they are only a bit wrong. But if instead we had norms that said you should argue with people who are very wrong, then arguing with someone who was very wrong would not make them look only a bit wrong.
I do think the second norm wouldn’t be that stable. Even if we started out like that, we would probably get pushed to the equilibrium we are in, because for various reasons people are somewhat more likely to argue with people who are only a bit wrong, even before any signaling considerations come into play. Which makes arguing some evidence that you don’t think the person is too wrong. And once it is some evidence, then arguing makes it look a bit more like you think a person might be right. And then the people who are loath to look a bit more like that drop out of the debate, and so it becomes stronger evidence. And so on.
Which is to say, engaging V-believers does not intrinsically make V more acceptable. But society currently interprets it as a message of support for V. There are some weak intrinsic reasons to take this as a signal of support, which get magnified into it being a strong signal.
My weak guess is that this signal could still be overwhelmed by e.g. constructing some stronger reason to doubt that the message is one of support.
For instance, if many people agreed that there were problems with avoiding all serious debate around V, and accepted that it was socially valuable to sometimes make genuine arguments against views that are terrible, then prefacing your engagement with a reference to this motive might go a long way. Because nobody who actually found V plausible would start with ‘Lovely to be here tonight. Please don’t take my engagement as a sign of support or validation—I am actually here because I think Bob’s ideas are some of the least worthy of support and validation in the world, and I try to do the occasional prophylactic ludicrous debate duty. How are we all this evening?’
Thanks for blogging, as always. A related concern that I have noticed is that if you have a norm against knowing the positions of past evil people, you risk enabling the same position to arise naturally again and again without refuting it while it’s still containable. I think that this is actually a very serious problem regarding anti-intellectualism.
So you’ve been watching Steven Pinker?
Treating wrongness as a quantity is at best a poor proxy for refusing to engage with people arguing in bad faith.
It does seem right to refuse to engage with people who don’t seem like they’re trying to process your argument, since that can mislead onlookers into thinking there’s a serious deliberative process going on (so they might anchor on the midpoint of the debate). If someone’s trying to get the right answer, that’s not really a problem, since they should presumably move pretty quickly towards your point of view if you are in fact right (and vice versa).
To give a somewhat more concrete/colorful example, offering a “defense” at a show trial with a predetermined verdict creates the impression that there’s a court trying to figure out what the right answer is, when there isn’t. (Related: it’s quite important that three people were found not guilty at Nuremberg, since is implies can.)
People who are not trying to get the right answer are going to be wronger than average, so these things will be positively correlated.
This interview is a good case study in what it looks like for one side not to be proceeding in good faith. To his credit, Buckley picks up on this with his very first question to Chomsky.
(Chomsky basically does the thing you suggest, though not very articulately.)
To be a bit more specific, the thing that’s bad about onlookers anchoring on “debate” participants’ points of view, is that this only makes sense epistemically if the participants are all trying to get the right answer. If someone is trying to act on you with their so-called beliefs instead of trying to model the world, they’re not a reliable source. The alternative is to process the arguments offered, and apply them to your own beliefs to determine what you think, when the arguments seem valid.
There’s another reason people say “X is not worth engaging with” that is quite different from this from a process perspective, but can look superficially similar: people in power often try to arrange things so that the powerless can’t be heard. Thus, “X is not worth engaging with” isn’t in itself a good or bad sign about someone’s epistemic virtue; one has to look into what’s going on and assess which side (if any) is proceeding in good faith.
I don’t think even your suggestion can avoid slipping back into the current equilibrium.
I mean, no matter how loudly you say that you aren’t here because you agree with the ideas, limitations of time and energy mean that no one will engage with the most extreme and absurd conspiracy theories. Getting up and arguing with people who believe the earth is flat is just too much of a waste of time, and useless for convincing anyone.
So while you might successfully signal that you don’t think the ideas you are arguing against are good ideas, you still signal that they aren’t the kind of totally absurd ideas, believed out of blind dogmatism, that you don’t bother to argue against at all, like flat-earthism.
If we lived in a world where people could more readily accept that sometimes people have really bad ideas but are simply mistaken, and aren’t part of some dogmatic, badly intentioned cult, maybe one could save something. But that’s an even harder problem, and we would have to fix it first.
The public importance of engaging views which are very wrong or bad seems a matter not only of how wrong the view is but of how dire its consequences are. That is, ideas which are wrong or bad mostly in a descriptive/positive sense don’t appear to receive the most attention. It’s not people espousing the zaniest conspiracy theories about lizard people who receive the most attention, but ideas perceived to rest on false premises which seem likeliest to have the most dire normative consequences, like neo-Nazism.
A lot of people also don’t make much of a distinction between the descriptive/positive and normative aspects of the world, i.e., between facts and values. Of course this leads to things like a religion perceiving as horrible behaviour or attitudes that don’t naturally fit into its worldview but are fine to everyone else.
Related to evaporative cooling in the humanities.
Is this even what’s going on?
Here is a possible reality:
A growing anti-war movement on the right started happening.
Mainstream right wing politicians are all pro-war.
So the anti-war right needed an alternative name.
Meanwhile, war is profitable, and profits mean you can buy PR.
So, if you can make Alt-Right equal Nazi, you can keep the money rolling in for a while longer.
There are enough people on the right and the left who want an end to the wars that any sort of real democracy (or even just a republic with representatives who wanted to keep their jobs) would have resulted in an end to playing around in the Middle East. Just as in 2008, when the majority of the people did not want bailouts, but neither party cared.
In this context I’ve seen the following signal: a somewhat rude comment or laugh, followed by a haughty but slow escape from the conversation.
By avoiding the conversation, and indicating that the person who thinks the argument is worthwhile is an idiot, the crowd tends toward the person who seems more powerful and confident.
I have a gut feeling that there are other reasons besides those mentioned so far that might cause (well-intentioned, rational) people to attempt to lock some wrong beliefs and belief systems away in the I Won’t Even Dignify That with a Reply closet. These reasons mostly have to do with modeling the wider populace as containing lots of people less rational than oneself. I don’t know how widespread or valid such assumptions are, but here are some overlapping examples of what I have in mind:
• Suppose V is a wrong and harmful viewpoint that follows from simple and intuitively very plausible (but wrong) arguments, and suppose the only really persuasive arguments against V are much more complex and difficult to understand. In that case, influential people who understand the anti-V arguments might decide that it’s safer to try and establish not-V as a sacred dogma and stigmatize the act of debating it, rather than risk the possibility that people who are unintelligent or only weakly engaged with the debate will be persuaded by the pro-V arguments.
• Suppose a society is split between Movementarians and non-Movementarians, and suppose the Movementarians believe that faithful Movementarianism is the only path to heaven, whereas non-Movementarians are condemned to hell for eternity if they don’t convert before they die. Suppose, furthermore, that once upon a time a splinter group, the Radical Movementarians, made the further inference that the act of endorsing any belief inconsistent with Movementarianism, if it turns even one soul away from Movementarianism, thereby does infinitely greater harm than any act of physical violence, and so the Radicals fought a bloody war to suppress all contrary beliefs, but they lost. Now almost all Movementarians belong to the Moderate faction, whose official doctrine is that using violence to suppress competing beliefs is itself grounds for eternal damnation. Most Movementarians dislike violence and find this doctrine agreeable, but those who bother to investigate notice that it rests upon a rather casuistical reading of the holy scripture.
Now suppose you’re a non-Movementarian who wants to avoid another holy war. You could argue against Radical Movementarianism on the grounds that Movementarianism more generally is false, but this has a high risk of backfiring, because few Movementarians will be persuaded, and you will alienate the rest and put them on a defensive footing. On the other hand, you’re reluctant to expound the official Moderate Movementarian doctrine against violence, because you disagree with its premises, and you don’t want to draw attention to the weakness of the official arguments for it. In that case, your best move might be to coordinate with Moderate Movementarian thought-leaders to stigmatize serious debate over the validity of the Radical viewpoint.
• Suppose that, once upon a time, the Militant Chauvinistic Party took over the government of Outgroupistan and launched a genocidal war, but they were defeated by the forces of Ingroupistan, and all surviving MCP sympathizers were purged from government and ostracized. Ever since then, it has been part of the Ingroupistani national mythology that the MCP represented the epitome of human evil. Nevertheless, some weirdos on the fringes of Ingroupistani politics openly admire the MCP and say that, instead of reviling them, Ingroupistanis ought to emulate them.
If you were a liberal Ingroupistani public figure, you might argue against the pro-MCP revisionists on the grounds that militancy and chauvinism are bad and lead to genocide, but then the pro-MCPers could use your words to try to win sympathy from conservative Ingroupistanis, by portraying you as a rootless cosmopolitan who denigrates the martial virtue of the Ingroupistani armed forces even as you depend on their sacrifices for your safety. If, on the other hand, you were to just ignore the substance of what the pro-MCPers were saying and wave the bloody shirt, you could hope that conservative Ingroupistanis would side with you out of patriotism, while your fellow liberals would continue to side with you because they are immune to the appeal of MCP ideology by disposition.
All of these rationales for refraining from openly debating ideas could just as easily be deployed in defense of bad dogmas as of good ones, and they don’t have any built-in mechanism for self-correction. That alone is reason enough to question them, but if you want to argue the point, you may run into difficulty, because I suspect that anyone who practiced this sort of self-censorship out of a lack of faith in the rationality of their audience would be loath to admit it openly.
In the last couple years, I’ve seen some interviewers criticized for interviewing people who espouse awful views as if they’re merely interesting personalities, and so only throw them softball questions. So the problem is when you give an awful view a platform without being willing to critically engage it.
Debates with people who are very wrong and debates with people who are only a little wrong could be analogized as strategic differences between enemies and tactical differences between potential allies, respectively. If possible, disagreements should be framed as concerning tactics rather than values. I think a norm attributing ignorance to people holding very bad views rather than malice until it can be shown otherwise is useful. If a common value can be found with the party holding the terrible view, it should be demonstrated how acting on the flawed tactic represented by this view fails to achieve that common value, or does so to a lesser extent than other alternatives. If the terrible view has internal inconsistencies, it should be critiqued on those grounds. If one party refuses to ground their assertions in fact, an argument should be crafted to convince a neutral observer that this is the case. Genuine differences over values might not be resolvable through argumentative means, and the norm against engaging with very wrong people may have resulted from this.