If we understand it, it’s not intelligence

I’ve recently discovered the papers of Drew McDermott, one of the great old AI researchers, who happens to be one of only two people within a 500 km radius interested in automated planning, and therefore a potential collaborator. Alongside his technical papers, he has written some very opinionated and interesting pieces on AI as a whole, including such famous papers as “Artificial Intelligence Meets Natural Stupidity” (written just after he got his PhD, no less).

An interesting piece on Deep Blue got me thinking. In this article, McDermott refutes the common argument that Deep Blue is not intelligent because it “only searches through millions of possible move sequences ‘blindly.'” Perhaps the real point is that because some people understand, and many have a general idea of, what Deep Blue actually does, it can’t possibly be intelligent: Intelligence is some kind of mysterious thing that happens in our heads while we’re not looking. Once we know how it works, it just becomes another man-made algorithm; the mystery is gone, hence there is no intelligence. In fact, one reason why neural nets seem so appealing as a model of intelligent systems might just be that they function as black boxes, whose inner workings nobody except the learning algorithm that trains them really understands, and therefore still retain the potential for being “intelligent”.
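To make the “blind search” idea concrete, here is a minimal sketch of plain minimax over a toy game tree. This is only an illustration of the general technique, not Deep Blue’s actual algorithm, which combined alpha-beta pruning, a hand-tuned evaluation function, and special-purpose hardware; the tree below is entirely made up.

```python
def minimax(node, maximizing):
    """Return the best achievable score from `node`.

    A node is either a number (a leaf's static evaluation)
    or a list of child nodes.
    """
    if isinstance(node, (int, float)):  # leaf: return its evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game tree: the maximizing player moves first,
# then the opponent (minimizing) picks within each inner list.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # prints 3: the opponent holds each branch to 3, 2, 0
```

Nothing here “understands” chess or anything else; it simply enumerates every move sequence and backs up the scores, which is precisely the behavior the “blindness” objection targets.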

It’s a bit reminiscent of Richard Feynman’s definition of a trivial theorem in mathematics as “one that has already been proved”. Once you’ve figured out how to do it, it seems easy, and anybody could have done it.

