A problem with listening to arguments is that they often fail to include the evidence that provoked them, which can be informative even where the argument itself is fatally flawed.
For instance, suppose there is a God. And suppose that people frequently see him, and so feel inclined to believe in him. However, they know ‘I saw God!’ will get little interest and much criticism, so they don’t say that. But, feeling more positively inclined toward pro-God arguments, they end up tentatively agreeing with some of them. They come to say ‘how did eyes evolve?’ and ‘where did the universe come from?’, because these are the most compelling-to-them pro-God arguments they came across. And so you—who have never seen God—just see a whole lot of people making bad arguments about God, and then weirdly believing them. But the important evidence—that a large portion of the population has experienced personally meeting God—is hidden from you, though in sum you might have taken it more seriously than you take a flawed argument.
If arguments seem more virtuous than anecdotes, you should remember that when people make arguments, they might be offering them in place of anecdotes that a) actually changed their minds and b) are actually interesting evidence.
This is especially true in a world where most people can’t argue their way out of a paper bag, and are also more frequently compelled by non-argument phenomena than by arguments.
So, an upshot is that if someone makes an argument to you, consider asking for the story of how they came to feel disposed toward its conclusion.
A real example:
I remember motivatedly reasoning in the past, and while I expect my arguments were above average, had someone wondered what produced them and asked me, I might have told them that I had been in the forests, and that they were incredible and made me feel different from how I felt in other places; that I was further offended by people getting their way and destroying value in the name of bad reasoning; and that I had always been basically on the environmentalist side, because everyone I knew said that it was wrong. And even if my arguments had been of no interest to someone, they could infer from my story that the forests were probably amazing to experience, and that local environmental politics was polarized (that the other side seemed frustratingly wrong could probably be guessed). Either of those is evidence on whether the forests should be destroyed, and probably not mentioned much in my arguments.
A possible example where this might help:
Perhaps people sometimes argue that AI will undergo a fast take-off, because “once there is a new feedback loop, who knows how fast it will go?” And you do not find this argument compelling—after all, there are new feedback loops all the time, and they rarely destroy the world. But what caused them to think AI will undergo a fast take-off? One possibility is that their intuitions are taking in many other things about the world, and producing expectations of fast take-off for reasons that the person does not have conscious ability to explain. If so, that would be interesting to know, regardless of the quality of their arguments. Another possibility is that they heard someone argue really compellingly, and they can’t remember the precise argument. Or they trust someone else who claimed it. These might be informative or not, depending on why the person seemed worth listening to.
It seems to me that the FOOM argument is a bad example here. I’m very confident that it is motivated by the argument, not by other evidence, and the anti-foom side is coming from (irrelevant) experience with feedback loops.
This is often true in general. You can pretty reasonably guess that the strongest arguments for a position are stronger than the reasons that you hear. But that is likely true on both sides of the issue.
We will get more evidence on whether the other evidence for foom is likely to hold up by seeing whether those other intuitions hold up as time goes on. For example, Eliezer stated in advance that AlphaGo was going to be either much better than Lee Sedol or much worse than him, not in some intermediate position, based on some of the same intuitions that led him to foom. He was mistaken.
So true! I would add that people often sense that arguments are really motivated by something else, ask about that, but then dismiss the real motivation as if it were not itself a piece of evidence. I think that’s a wrong assumption. Like, if someone’s arguing that Trump’s rhetoric is harmful because it inspires hate crimes, but they don’t really have any direct evidence of that, and someone responds, “You’re only worried about hate crimes because you’re Jewish.” Sure, that person might have an inflated idea of the likelihood of hate crimes or be imagining that they’ll escalate to holocausts far more often than they actually do, but the reality of Jewish fears of persecution and the holocaust is pretty relevant to the issue at hand. I like the way you’ve spoken about it here: the real anecdote or unacknowledged belief that led to the argument presented isn’t necessarily out of the realm of the debate.
I’ve tried to do this when discussing AI timelines; I went around telling a few people that my timelines got shorter over the last year or two, and when they asked why, I said “causally, mostly experiencing AlphaGo and talking to Critch a lot.”
@michaelvassar I actually feel that the “seeing god” FOOM is surprisingly on the nose! But maybe it’s just because I saw the devil, and she told me that strong AI wouldn’t happen in our lifetimes.