Relatively minor technological change can move the balance of power between values that already fight within each human. Beeminder empowers a person’s explicit, considered values over their visceral urges, in much the same way that the development of better slingshots empowers one tribe over another.
In the arms race of slingshots, the other tribe may soon develop its own weaponry. In the conflict between spontaneous urges and explicit values, though, I think technology should generally tend to push in one direction. I’m not completely sure which direction that is, however.
At first glance, it seems to me that explicit values will tend to have a much better weapons research program. This is because they have the ear of explicit reasoning, which is fairly central to conscious research efforts. It seems hard to intentionally optimize something without admitting at some point in the process that you want it.
When I want to better achieve my explicit goal of eating healthy, cheap food, for instance, I can sit down and come up with novel ways to achieve it. Sometimes such schemes even involve tricking the parts of myself that don’t agree with this goal, so divorced are they from this process. When I want to fulfill my urge to eat cookie dough, on the other hand, I less commonly deal with it by strategizing to make cookie dough easier to eat in the future, or by tricking other parts of myself into thinking that eating cookie dough is a prudent plan.
However, this is probably at least partly because the cookie-dough-eating values are shortsighted. I’m having trouble thinking of longer-term values I have that aren’t explicit on which to test this theory, or at least trouble admitting to them. This is not very surprising: if they are not explicit, presumably I’m either unaware of them or don’t endorse them.
This model in which explicit values win out could be doubted for other reasons. Perhaps it’s pretty easy to determine unconsciously that you want to live in another suburb because someone you like lives there, and then, once you have justified it by saying it will be good for your commute, all the logistics that require consciousness can still be carried out. In this case it’s easy to almost-optimize something consciously without admitting that you want it. Maybe most cases are like this.
Also note that this model seems to be in conflict with the model of human reasoning as basically consisting of implicit urges followed up by rationalization. And sometimes at least, my explicit reasoning does seem to find innovative ways to fulfill my spontaneous urges. For instance, it suggests that if I do some more work, then I should be able to eat some cookie dough. One might frame this as conscious reasoning merely manipulating laziness and gluttony to get a better deal for my explicit values. But then rationalization would say that. I think this is ambiguous in practice.
Robin Hanson responds to my question by saying there are not even two sets of values here to conflict, but rather one which sometimes pretends to be another. I think it’s not obvious how that is different, if pretending involves a lot of carrying out what an agent with those values would do.
An important consideration is that a lot of innovation is done by people other than those using it. Even if explicit reasoning helps a lot with innovation, other people’s explicit reasoning may side with your inchoate hankerings. So a big question is whether it’s easier to sell weaponry to implicit or explicit values. On this I’m not sure. Self-improvement products seem relatively popular, and to be sold directly to people more often than any kind of products explicitly designed to e.g. weaken willpower. However products that weaken willpower without an explicit mandate are perhaps more common. Also much R&D for helping people reduce their self-control is sponsored by other organizations, e.g. sellers of sugar in various guises, and never actually sold directly to the customer (they just get the sugar).
I’d weakly guess that explicit values will win the war. I expect future people to have better self-control, and to do more of what they say they want to do. However, this is partly because of other distinctions that implicit and explicit values tend to go along with, e.g. farsighted vs. not. It doesn’t seem that implausible that implicit urges really wear the pants in directing innovation.
I’m curious about how you arrive at this kind of two-tailed intuition. (It seems this would leave you thinking the effects will probably be balanced or small.)
But to respond to your question: since our implicit wants are more important for quotidian decisions, I would think that, when technology is developed for the market, it should primarily serve implicit wants (urges). What technology most altered the balance between urge and ideal? Surely birth control had more effect (for urge) than Beeminder (for ideal).
(But I’m not betting that capitalism will last long, even if humanity does.)
An implicit urge to reproduce as prolifically as the food supply will support, and with a diverse set of partners, is shaped by selection pressures that act over a time frame most explicit plans can’t accommodate. It’s entirely blind to the future, but offers a window (of sorts) deep into the past, so it’s both “shortsighted” and not.