(Appeared in Washington Square News.)
February 2011. The news was starting to report on protest movements in the Arab world that would eventually be known as the Arab Spring. Right after the 16th of that month, though, some headlines were reserved for the cognitive achievements of a certain character named Watson. Long story short, Watson won the 1-million-dollar first prize on a televised trivia game called “Jeopardy!”. While winning that kind of money is usually big news from the winner’s point of view, almost nobody else would have cared if it weren’t for the fact that Watson was a cluster of 2,880 computer processors with 16 terabytes of RAM built by IBM.
That is by all means impressive. But as hardware and software quietly and consistently evolve, even such events are ceasing to draw attention. Not long ago, a computer outperformed a highly skilled human at chess. At some point before that, a machine computed an approximation of pi more accurate than any brain-bearing creature could obtain in a lifetime. Examples of machines outperforming humans at certain cognitive tasks abound, but can it be said that these machines are intelligent? Well, they are definitely able to acquire and apply knowledge and skills (the definition I grab from my Mac’s dictionary), but let’s bring a more sophisticated view to the table.
In 1950, Alan Turing proposed a test in which a machine should exhibit intelligent behavior indistinguishable from that of an actual human. The Turing Test, as it is now known, became a standard for AI in computer science circles. But, as it turns out, that’s not a very reassuring definition. You can easily imagine a setting where, with the proper interface, most people wouldn’t be able to tell whether they were playing chess or Jeopardy! against a human or a computer. Moreover, computer systems can display that level of intelligence in more artistic tasks as well. For example, researchers at the Sony Computer Science Lab in Paris recently developed an algorithm for jazz composition that passed the Turing Test.
That particular experiment made me feel more comfortable with my opinion about jazz, but before you hate me, consider this: eventually, machines will probably be able to pass the Turing Test in any cognitive or creative task you can imagine (if, of course, we manage to keep existing as a society that supports scientific research for long enough). And if there has been no news of software passing the Turing Test for your favorite style of music, that’s probably because nobody has bothered working on it yet.
Building a drone that resembles and acts like a human (as in Spielberg’s movie A.I.) seems much more difficult, but at least in principle (that is, in the form of a mathematical theorem), there’s nothing preventing one from being implemented. Whether they will ever exist, and how we’ll react to them, is time’s burden to answer, and sci-fi authors’ task to imagine. But, so far, there’s little to be said against their inevitability. Perhaps the Watson of the future, after winning a championship, will go out with some friends to celebrate over drinks, and even brag about it on its favorite online social network.
Algorithms and machines that outperform humans at increasingly sophisticated tasks keep appearing. We will learn about them, get impressed that we were able to build them, and get mad that some of them ended up stealing our jobs. Then we’ll move on with our lives, eventually getting grumpy that they are not working properly, the same way we get grumpy at people over trivialities.
So, yes, it’s fine to call these machines and algorithms intelligent when they meet the criteria of intelligence that we set for them. In fact, it’s important not to forget what they are able to achieve and how they improve our lives. On the other hand, they are constantly raising the threshold for an activity, or attitude, to be called intelligent, which in turn pushes the limits of what we can do as human beings. That reminds us to be careful and save the adjective for when it is truly appropriate.
Finally, to say that something is intelligent is less risky than saying that it “knows,” it “feels,” it “thinks,” or it is “conscious.” These are deeper philosophical mazes, often and improperly included in the AI debate, and they are what really cause controversy. I’ll conveniently note that this text is approaching its size limit and skip those topics, thus preventing you from getting self-conscious and having existential thoughts. Also, rest assured that this was not written by an AI system designed to convince readers that they are computers. (Subtly smiling.)