“The computer scientist Donald Knuth was struck that ‘AI has by now succeeded in doing essentially everything that requires “thinking” but has failed to do most of what people and animals do “without thinking” – that, somehow, is so much harder!’”
– Nick Bostrom, Superintelligence, p. 14
There are some activities we think of as involving substantial thinking that we haven’t tried to automate much, presumably because they require some of the ‘not thinking’ skills as precursors. For instance, theorizing about the world, making up grand schemes, winning political struggles, and starting successful companies. If we had successfully automated the ‘without thinking’ tasks like vision and common sense, do you think these remaining kinds of thinking tasks would come easily to AI – like chess in a new domain – or be hard like the ‘without thinking’ tasks?
Sebastian Hagen points out that we haven’t automated math, programming, or debugging. These seem much like research, and at least they don’t require complicated interfacing with the world.
Crossposted from Superintelligence Reading Group.
AGI research is “like seeing”: narrow AI has gotten increasingly good at doing more-or-less any specific task you set it to, while AGI advances comparatively slowly because it’s a scientifically harder problem with much lower marginal value.
We do automate math and programming.
Automated math comes in the form of theorem provers (e.g., Coq, Isabelle). The first step is automating proof checking; finding proofs is then largely a brute-force search. Finding good theorems is the part that resists automation, because we don’t know how to specify what makes a theorem worth proving.
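A toy illustration of that split between brute-force search and cheap checking, in a purely propositional setting (my example, not from the comment):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is (not a) or b.
    return (not a) or b

def is_tautology(formula, n_vars):
    """Brute-force 'proof' by exhausting all 2**n_vars truth assignments.
    Feasible only for tiny formulas; real provers like Coq and Isabelle
    instead make *checking* a given proof cheap and leave the search
    to tactics, heuristics, or the human."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# Peirce's law, ((p -> q) -> p) -> p, is a classical tautology:
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_tautology(peirce, 2))                       # True
print(is_tautology(lambda p, q: implies(p, q), 2))   # False (fails at p=True, q=False)
```

Deciding *which* formulas to feed it – the “good theorems” problem – is exactly what this kind of enumeration does not address.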
Automated programming already exists in the form of frameworks and libraries. The hard part to automate is design: taking fuzzy human requirements and turning them into precise, consistent orders. That is mostly a communication job – there could be a chatbot that generates UML diagrams from a conversation with you, say. As far as I know, nothing like this has been seriously tackled.
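To make that concrete: once a spec *is* precise, even brute force can write small programs. The toy DSL and function names below are my own illustration – the point is that the enumerator needs exact input/output examples, and producing those from fuzzy requirements is the untouched design problem:

```python
# Tiny DSL: expressions over a single input x, built from a few primitives.
PRIMITIVES = [
    ("x",     lambda x: x),
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x * x", lambda x: x * x),
]

def compose(outer, inner):
    return lambda x: outer(inner(x))

def synthesize(examples, max_rounds=3):
    """Enumerate compositions of primitives until one matches every
    (input, output) example. This only works because the 'requirements'
    are already a precise spec."""
    candidates = list(PRIMITIVES)
    for _ in range(max_rounds):
        for name, fn in candidates:
            if all(fn(i) == o for i, o in examples):
                return name
        # Grow the search space: wrap each primitive around each candidate.
        candidates = candidates + [
            (n1.replace("x", "(" + n2 + ")"), compose(f1, f2))
            for (n1, f1) in PRIMITIVES
            for (n2, f2) in candidates
        ]
    return None

print(synthesize([(1, 4), (2, 6), (3, 8)]))  # -> '(x + 1) * 2'
```

Everything interesting here happened before the code ran: someone decided the function should map 1 to 4, 2 to 6, and 3 to 8.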
Debugging is not a well-defined task, so what would automating it even look like?
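One existing, narrow answer is test-case reduction in the style of Zeller’s delta debugging: once you have a well-defined oracle (“this input still triggers the bug”), shrinking the failing input is mechanical. A simplified greedy sketch of that idea (not the full ddmin algorithm):

```python
def minimize(failing_input, still_fails):
    """Greedy input minimizer: repeatedly try deleting chunks of the
    input, keeping any deletion after which the bug still reproduces.
    This automates one well-defined slice of 'debugging' -- reducing a
    failing case -- not the open-ended job of understanding the fault."""
    chunk = len(failing_input) // 2
    while chunk >= 1:
        i = 0
        while i < len(failing_input):
            candidate = failing_input[:i] + failing_input[i + chunk:]
            if candidate and still_fails(candidate):
                failing_input = candidate   # deletion kept the bug; stay at i
            else:
                i += chunk                  # deletion lost the bug; move on
        chunk //= 2
    return failing_input

# Hypothetical bug: any input containing 7 "fails".
print(minimize(list(range(10)), lambda xs: 7 in xs))  # [7]
```

The well-defined part (shrinking) automates nicely; the ill-defined part (deciding what counts as “fails”, and why) is where the question bites.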