(How do you do that? Do what? Always answer a question with a question, like that.)
by Zachary C. Miller
Literacy in the Information Age - Historical Perspectives on
Information Revolution(s) and the Self
"Marvin Minsky once said something like, 'AI is whatever hasn't been done yet.' First, computers that could do symbolic integration better than humans were the holy grail of AI. Then, we did that. Later, it was, 'If a computer can beat the world champion at chess, I'll consider it intelligent.' But we've done that too.
"It seems like, since whenever we do something we understand how it was done, that nothing we ever actually do is going to seem as magical as our brains." -Keith Winstein
The title question is a reduction of a classic question at the core of Artificial Intelligence (AI) philosophical debates. The more general form of the question is, "Can AI be achieved, and if so, how do we know when it has been achieved?" This question has a different answer for each instance of its asking: different people have different answers, and different times have different answers. It is surprising that so many AI researchers seem to know what they are doing, but so few of them agree about where they are going. But perhaps this is how the frontier always seems; perhaps you can't know where you are going until you get there. Given this ever-changing landscape of answers to the fundamental philosophical dilemma, I am pretty comfortable offering my own anonymous opinion on the matter.
In every field there is the work and there is the philosophy of the work; it is important for the researcher to understand both. AI people have their work cut out for them: "make a computer that does what a human can do." The field progresses ever onward, with AI researchers and all the rest of the cognitive scientists (neuroscientists, philosophers, linguists, and psychologists) collaborating to discover what makes people tick and how to make computers tick the same way. While the problem remains unsolved, it is easy to move forward toward a solution; it is when we arrive at a potential solution that the philosophy comes into play. Once we start to try to verify an answer, we also start to get confused about the question. Is this intelligence? What is intelligence? Is this consciousness? What is consciousness? Is consciousness a prerequisite for intelligence, or vice versa, or are they orthogonal qualities? Even the fundamental goal of the work comes into question. Are we really trying to build a replica of the human mind? Or are we trying to build something better? Or are we trying to build an inferior approximation? How human-like does a computer have to be before it can be like a human?
Was Helen a success or a failure? The answer lies in the context in which the question is understood. Was Helen a success of the AI field? Did Helen fulfill her design requirements as specified by the bet? Did Helen advance the researchers' understanding of their field? Did Helen achieve awareness/consciousness?
In general, I believe Helen was a failure. Superficially she was a failure because she did not pass the test of her design specification: she did not show a human-like proficiency for interpreting texts. More deeply, she could not deal with the universe in the way that a human can. The context of human existence is fraught with contradiction: contradictions of philosophy, contradictions of behaviors and morals. The world is full of sad stories and evil deeds. The world is full of misunderstandings. The world is full of notions that cannot be paraphrased. Helen could not integrate this world into her neural net. Helen could not learn to cope with the impossible. This ability is a defining feature of humanity: we may have created a complex mess, but we can live in it; we may have done evil, but we can still do good; and we can dream. If humans behaved as Helen did, there would be no humans.
But if she was such a failure, why do I keep calling her "she"? Helen did succeed at a number of things. Helen captured the imagination of her creators. Helen forced anyone who observed her (within the book, or reading the book) to think critically about their own consciousness. We saw characters in the book (in the mental hospital) who may have made even less sense than Helen. Could an Alzheimer's patient beat Helen in a Turing test? Did Helen fail the Turing test, or did we? Could I beat Helen in a Turing test? If Helen beat someone in a Turing test, would that person be any less human? Is Helen any less conscious than an Alzheimer's patient? These are hard questions that get at the core of our compassion and our notion of humanity. I do not have the answers.
Helen furthered the philosophy (both real and virtual) by giving us something to think about, by giving us more questions to ask. Even if these questions have already been asked in other forms, even if they are all reductions of the one question of AI, they are still important. Each question carries with it an observation and a context, and it is on observation and context that all of science is based.