Artificial What?
WHEN I WAS EIGHT YEARS OLD, men walked on the moon. It was the biggest anticlimax of my life. To someone raised on “Star Trek” and “Lost in Space,” it was a huge disappointment to learn that we had just gotten around to something that seemed so basic. I was reminded of this recently when a computer beat Garry Kasparov at chess. Grownups professed shock, but I’ll bet a lot of eight-year-old Webheads were surprised that such an elementary task had taken so long.
The computer’s victory brought renewed attention to the field of artificial intelligence (AI), especially to the question of whether computers are capable of thought. Just about every book on AI has a chapter called something like “Can Machines Think?” Philosophers, computer scientists, biologists, psychologists, theologians, and linguists have all offered their opinions. I cannot claim expertise in any of these fields, but I do make my living with words. And since the usage of a word is at issue here, I feel qualified to offer my opinion, which, briefly stated, is: Who cares?
Most of us are familiar with Alan Turing’s test, which says, in effect, that a computer is thinking if it can imitate a person as convincingly as RuPaul imitates a woman. In fact, Turing never meant “the imitation game,” as he called it, to be a test. It was more like those “You Will” commercials for AT&T—a preview of what computers might someday be capable of, at a time (1950) when they were about as powerful as today’s pocket calculators. He dismissed the question “Can machines think?” as “too meaningless to deserve discussion.” Still, AI is never mentioned today without dragging Turing in, and, thanks to this unbreakable custom, his so-called test has dominated the “can computers think” question ever since.
The problem with defining thought is that the word has many meanings, most of them ineffable. If thinking is just manipulating data and reacting, then any vending machine can think, and so what? Whereas if thinking is a biological function, then a machine can no more think than it can urinate—and again, so what?
In fact, that might make a better test, since most of us routinely go longer without thinking than we do without urinating. And you probably could design a machine that would mimic this essential human function by excreting a liquid of the proper temperature and composition at appropriate intervals.
But you wouldn’t want to.
Similarly, clever AI researchers might build a computer that would lie, cheat, become bored, make mistakes, jump to conclusions, and get a crush on the new administrative assistant. They might even build one that would spend all its time quibbling with other computers about what thinking is. But again, what’s the point? That would be artificial stupidity, and we have more than enough of the natural kind.
In concentrating so much on mimicry, each side in the debate concedes the other’s point. By insisting so vehemently that a computer can do anything a person can, AI visionaries are letting human behavior define thinking. Otherwise why would you bother training a machine to act silly or write mediocre fake Mozart? Similarly, when skeptics devise clever ways of outsmarting computers or cheating on the Turing test, they acknowledge the validity of equating human and machine thought. No one would race a person against a car, because the two aren’t in the same class; pitting the human brain against a computer is therefore an admission that they are.
Philosophers, like lawyers, get paid to make work for one another. That’s why they keep coming up with new definitions of intelligence, carefully crafted to include or exclude computers, according to their preconceptions. For the rest of us the important questions are: What are computers good at, what are humans good at, and how can they work together? To insist, as one recent work does, that “a precise answer to the question ‘What do we mean by intelligence?’ is one of the most important of the goals that artificial intelligence researchers should be striving to attain” could not be more wrong. Learning to ignore that question is the most important goal. We no more need a definition of intelligence to build smart machines than we need a definition of time to build a watch, and this obsession with lexicography is nothing more than a self-imposed red herring.