People are working on making robot cars communicate with pedestrians, for instance.
Notice that the apparent benefit of having cars communicate with pedestrians doesn’t actually have much to do with robots driving the cars. If having cars signal to pedestrians is useful, probably so is having drivers signal to pedestrians. Yet current cars and driving norms hardly provide for this at all. Many a time I have thought about this when trying to cross a road when there is a car coming toward me that seems to be slowing down, kind of, and whose windscreen I can’t really see through. Is the driver waving to me? Eating a sandwich? Hard to tell, so I won’t take my chances. Ah, now he’s stopped. And he’s annoyed. Or swatting a fly. Does that mean he’s about to go? Hard to tell, maybe I’ll just wait a sec to be sure. Now he’s really annoyed – annoyed enough to give up and drive on?… If only there were some little signal that meant ‘while this signal is on, I see you and am stopping for you’.
This is not my real point, but an example. Thinking about a strange future of robot cars causes us to make predictions and envision potentially valuable additions to it that have little to do with robot cars. Similarly, thinking about future AI development causes people to wonder if sudden leaps in technological capacity could cause a small portion of humanity to get far ahead of the rest, or if human values might be lost in the long run. These issues are not specific to AI. Yet when we look at the world around us we seem less likely to see ways to improve it, or to wonder why no groups of humans do get ahead of the rest technologically, or even notice that technological changes tend to be relatively small, or to ask what is becoming of our values.
In general it seems that thinking about strange scenarios causes people to expect things to happen which have little to do with the scenarios. Since they have little to do with the scenarios, it makes sense to ask why they haven’t already happened, or whether we could already benefit from them.
Some men see things as they are and say, why? I dream of things the way they never were and say, why not?
– Robert F. Kennedy, after George Bernard Shaw
Dreaming of the way things never were seems more impressive, difficult, and useful. Perhaps thinking of strange scenarios is one way to do it more easily.
Haven’t you pretty much described large portions of academia, especially fields such as philosophy, string theory, algebraic geometry and so on, all of whose goal is pretty much to think about very interesting — but at the end of the day, very unrealistic — scenarios that could have potentially useful spillovers into other areas of thought?
Someone in the comments to a post on self-driving cars on Marginal Revolution suggested that self-driving cars might have seatbelts that you have to buckle or the car won’t start. Which is an awesome idea, and there is no reason this shouldn’t already have been implemented in driverful cars. It’s also an idea that really has nothing to do with driverless cars, but seems to have come out of a total rethink of driving itself.
I hadn’t considered communication with pedestrians before. Another related thing would be for robot cars to communicate where pedestrians are when they detect them. These could then appear on the “map” that other robot cars in the area have available to them. Essentially, other cars would act as “remote sensors” for my robot car, improving the accuracy of its model of the world it is driving through.
Could this be done in human driven cars? Yes! From “dumb” cars to “smart” cars to robot-driven cars is a continuum of technologies. Any sensing and detection and communication a robot car finds useful for modifying its map of the world will also be useful to inform a human driver of what is around him.
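The “remote sensors” idea above can be sketched in a few lines. This is a hypothetical, much-simplified protocol, not any real system: all names (`Detection`, `WorldMap`, `merge`) are made up for illustration, and real cars would need shared coordinates, trust, and latency handling.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Detection:
    """A pedestrian sighting reported by one car (hypothetical format)."""
    ped_id: str       # made-up stable identifier for a tracked pedestrian
    x: float          # position in some shared coordinate frame
    y: float
    timestamp: float  # seconds; newer reports supersede older ones

@dataclass
class WorldMap:
    """One car's local model of nearby pedestrians, merged from all reports."""
    pedestrians: dict = field(default_factory=dict)  # ped_id -> Detection

    def merge(self, report: Detection) -> None:
        # Accept a report, whether from our own sensors or another car's
        # broadcast, keeping only the freshest sighting per pedestrian.
        current = self.pedestrians.get(report.ped_id)
        if current is None or report.timestamp > current.timestamp:
            self.pedestrians[report.ped_id] = report

# A remote car broadcasts a pedestrian; our own fresher sighting then wins.
my_map = WorldMap()
my_map.merge(Detection("ped-1", 3.0, 4.0, timestamp=10.0))  # remote report
my_map.merge(Detection("ped-1", 3.5, 4.2, timestamp=12.0))  # own sighting
```

The same merged map could just as easily drive a dashboard display for a human driver as a planner for a robot one, which is the continuum the comment describes.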