Impressive, Autopilot, But No Thanks

It wasn’t too long after exercise bicycles were invented that people started noticing the irony of those who drive to the gym to ride a stationary bike. In these modern days of automation and mindful meditation, the same folly applies to the office worker who gets stressed developing an algorithm to execute a boring task and save some time, only to later spend time mindfully executing some other boring task to relax. If current expectations about the future of mobility hold, soon that office worker will automate a boring task in a car that self-drives to the gym, ride a stationary bike, then mindfully perform another boring task to relax while the car self-drives back home.

Having a car that drives itself seems at first a good idea. Perhaps we could use the time to learn a new language, work on our startup, catch up with friends on social media, or take a nap. But here I want to argue for the default option: that we should be driving our cars, even if not required to.

To be fair, it’s not easy. On the practical side, traffic is chaotic, other drivers are aggressive, roads are badly maintained, and the commute is long. In terms of productivity and time management, given the pressures and anxieties of work, we could indeed often use the minutes we spend driving elsewhere. A third major factor shaping the experience on the road is design: most popular cars are simply boring to drive — the few people who appear to be excited about their automobiles are those who can afford sleek, sporty models from a handful of Italian manufacturers.

Now Silicon Valley is determined to solve the problem of getting us some extra time through automation, and, perhaps by following Steve Jobs’ mantra that consumers don’t know what they want until it’s shown to them, we seem to have collectively agreed that self-driving is a goal we should be spending a lot of brain power on right now — as if it would somehow solve one of the most pressing issues of our existence.

It will not.

While it would certainly be better to have the option of not driving, resources are scarce, and we do have to choose priorities. Just to stay within the same industry, how about bringing the development of engines based on renewable energy to the top of the agenda instead? Tesla should be making headlines for its electric engine, not for its autopilot software.

Entrepreneurs know that consumers are much more susceptible to arguments that benefit them personally (the promised safety of an autopilot) than to something vague and intangible (the environment). There are, however, major psychological benefits to be gained from driving, which we can tap into with just a bit of mindfulness, and some self-control over road rage.

First, recognition and contemplation of our good luck. If we zoom out a little on the line of history, we see the miracle of science and engineering involved in pushing us across the surface of the Earth faster than any human could travel until a mere century ago. While we still have to keep our hands on a circular interface in order to tell the machine exactly where to go, our ancestors had to pull ropes attached to various domesticated animals — none of which had air conditioning, cup holders, or GPS.

Second, if in the name of productivity we eliminate every possible chunk of idle time, when will we give epiphanies the opportunity to happen? So-called “shower thoughts” — which occur when we are relaxed, performing some rather boring task almost automatically — are some of the best sources of good ideas, provided we serendipitously pay attention. Obviously, one cannot be too relaxed on the road, but even this is good: the need to keep constant focus on a methodical task is a great way to tame a restless mind — precisely what mindfulness advocates argue for.

Third, in the chaos of modern life, when we are so often made aware of how hard it is to have things happen as we’d like them to, the feeling of control over a machine much heavier and stronger than we are brings a sense of power and confidence that we rarely encounter elsewhere.

We do need cars that guard us from occasional mistakes, and certain people — for example, the elderly and the disabled — would genuinely benefit from full automation. But Silicon Valley and the auto industry are trying to tell us, with a certain imperative attitude, that we all need self-driving cars. We really don’t. What we need instead is to rethink driving: to be conscious of, and to leverage, the numerous therapeutic opportunities it provides.

On Singularities

A computer, once again, outperformed a human in a highly specific task, this time at the game of Go, using, in part, a recent (well, not that recent, but also not “traditional”) AI technique known as Deep Learning.

The media, once again, made a splash, and some critics were quick to dismiss the feat by pointing to the limitations of Deep Learning (the winning algorithm also used more traditional AI methods).

Of course, the people at the forefront of Deep Learning know its limitations better than anyone — they simply have more faith in it than others do. Deep down (no pun intended), they probably don’t like such splashy news either, because it raises expectations, but we all understand the importance of advertisement (we live in a social, political world).

If history is any guide, the current hype will pass, as many other AI hypes have. It is not impossible that general AI will happen. Singularities do happen: this universe, self-replication, self-consciousness. But they seem to occur only every billion years or so.

Hence, in the big scheme of things, the last singularity happened just “yesterday,” and we will have to wait a whole lot for the next one. Current AI advances are admirable, and important, but as a society we have to learn to look at them for what they really are: incremental steps.

Related: On the Higgs Boson Hysteria

A.I. Paranoia in the News

What’s the difference between the following sentences? (a) Robot Kills Man; (b) Man Killed by Robot. In this case it’s not simply a question of active versus passive voice. The first also implies intention, which, despite the current hype, robots do not possess. For many media outlets, however, the death of a worker in an incident with a factory robot earlier this month apparently wasn’t sad enough to make the news on its own, so they had to insert some element of evil Artificial Intelligence into the story. The list below is a sample of how different news websites titled the corresponding piece.

Fortune: “Worker killed by robot at VW plant”

Ars Technica: “Man killed by a factory robot in Germany”

The Independent: “Worker killed by robot at Volkswagen car factory”

BBC: “Man crushed to death by robot at car factory”

Gawker: “Worker Crushed to Death by Robot in Volkswagen Plant”

Geek.com: “Volkswagen robot factory worker takes a human life”

NBC News: “Robot Crushes Contractor to Death at VW Motor Plant in Germany”

NY Post: “Robot kills man at Volkswagen plant”

USA Today: “Robot hits, kills Volkswagen worker”

The Guardian: “Robot kills worker at Volkswagen plant in Germany”

Mashable: “Robot kills man at Volkswagen plant in Germany”

Business Insider: “A robot killed a factory worker in Germany”

CNN: “Car assembly line robot kills worker in Germany”

Time: “Robot Kills Man at Volkswagen Plant”

Telegraph: “Robot kills man at Volkswagen plant in Germany”

NY Times: “Robot Kills Man at Volkswagen Plant in Germany”

The Register: “Rise of the Machines: ROBOT KILLS MAN at Volkswagen plant”

Breitbart: “AS FEARS OVER ARTIFICIAL INTELLIGENCE GROW, A ROBOT KILLS A WORKER IN A GERMAN CAR FACTORY”

The State and Future of Artificial Intelligence, Part 2

(Appeared in Washington Square News.)

February 2011. The news was starting to report the protest movements in the Arab World that would eventually be known as the Arab Spring. Right after the 16th of that month, though, some headlines were reserved for the cognitive achievements of a certain character named Watson. Long story short, Watson won the 1-million-dollar first prize on a trivia-style televised game called “Jeopardy!”. While winning that kind of money is usually big news from the winner’s point of view, almost nobody else would have cared if it weren’t for the fact that Watson was a cluster of 2,880 computer processors with 16 terabytes of RAM built by IBM.

That is by all means impressive. But as hardware and software quietly and consistently evolve, even such events are ceasing to draw attention. Not long ago, a computer outperformed a highly skilled human at chess. At some point before that, a machine computed an approximation of pi more accurate than any brain-bearing creature could obtain in a lifetime. Examples of machines outperforming humans in certain cognitive tasks abound, but can these machines be said to be intelligent? Well, they are definitely able to apply and acquire knowledge and skills (the definition I grab from my Mac’s dictionary), but let’s bring a more sophisticated view to the table.

In 1950, Alan Turing proposed a test in which a machine should exhibit behavior intelligent enough to be indistinguishable from that of an actual human. The Turing Test, as it is now known, became a standard for AI in computer science circles. But, as it turns out, that’s not a very reassuring definition. You can easily imagine a setting where, with the proper interface, most people wouldn’t be able to tell whether they are playing chess or Jeopardy! against a human or a computer. Besides, computer systems are able to display that level of intelligence in more artistic tasks. For example, researchers at the Sony Computer Science Lab in Paris recently developed an algorithm for jazz composition that passed the Turing Test.

That particular experiment made me feel more comfortable with my opinion about jazz, but before you hate me, consider this: eventually, machines will probably be able to pass the Turing Test in any cognitive or creative task you can imagine (if, of course, we manage to keep existing as a society that supports scientific research for long enough). Besides, if there has been no news of software passing the Turing Test for your favorite style of music, that’s probably only because nobody has bothered working on it yet.

Building a drone that resembles and acts like a human (as in Spielberg’s A.I. movie) seems much more difficult, but at least in principle (that is, in the form of a mathematical theorem), there’s nothing preventing one from being implemented. Whether such machines will ever exist, and how we’ll react to them, is time’s burden to answer and sci-fi authors’ task to imagine. But, so far, there’s little to be said against their inevitability. Perhaps the Watson of the future, after winning a championship, will go out with some friends to celebrate over drinks, and even brag about it on its favorite online social network.

Algorithms and machines that outperform humans in increasingly sophisticated tasks keep appearing. We will hear about them, get impressed that we were able to build them, and get mad that some of them ended up stealing our jobs. Then we’ll move on with our lives, eventually getting grumpy that they are not working properly, the same way we get grumpy at people over trivialities.

So, yes, it’s fine to call these machines and algorithms intelligent when they pass the criteria of intelligence that we set for them. In fact, it’s important not to forget what they are able to achieve and how they improve our lives. On the other hand, they are constantly pushing the threshold for an activity, or attitude, to be called intelligent, which in turn pushes the limits of what we can do as human beings. That should remind us to be careful and to save the adjective for when it is truly appropriate.

Finally, to say that something is intelligent is less risky than saying that it “knows,” it “feels,” it “thinks,” or it is “conscious.” These are deeper philosophical mazes, often, and improperly, included in the AI debate. They are the ones that really cause controversy. I’ll conveniently note that this text is approaching its size limit and skip those topics, thus preventing you from getting self-conscious and having existential thoughts. Also, rest assured that this was not written by an AI system designed to convince readers that they are computers. (Subtly smiling.)

The State and Future of Artificial Intelligence, Part 1

(Appeared in Washington Square News.)

It’s an early morning of the newly arrived winter. The people I can see on the street from my window wear heavy coats, but it’s unclear how cold it is. I could open the window and let my built-in skin sensors take an approximate measurement, but I realize a much more accurate value can be obtained by pressing a button. “Siri, what’s the temperature outside?,” I ask, with a Brazilian accent most Americans I talk with think is Russian. “Brr! It’s 32 degrees outside,” answers the piece of rectangular glass I hold. It’s a female voice, with an accent of her own. Artificial. That’s probably how I’d describe it.

The application, whose name is said to stand for Speech Interpretation and Recognition Interface, has encountered a wave of sarcastic, philosophical, flirtatious, and mundane questions since it was made natively available on certain iOS devices in October 2011. Countless jokes featuring Siri made their way through the nodes of the social-media graph, and books about her witty personality have been printed. But if you could take Siri on a time trip back to when your grandmother was 10 (fear not, the time-travel paradox involves your grandfather), your talking device would definitely fulfill Clarke’s third law, and she would qualify it as “magic.” Perhaps she would even call Siri “intelligent.”

We’ll skip past the fact that Siri is, indeed, intelligent according to the definition she grabs from Wolfram Alpha when asked, for there’s no consensus about what it means to be intelligent (nor about what “meaning” means, as a matter of fact; but enough about metalinguistics). In what follows, I’ll put Siri in context with recent developments in artificial intelligence. But first, come back from your time travel and book a trip into your brain. This one is easier: simply think of your grandmother.

By doing so, a specific area in the back of your brain, responsible for face recognition, activates. Moreover, it has been conjectured that a single neuron “fires” when you think of her. It’s the “grandmother neuron” and, as the hypothesis goes, there’s one for each particular object you are able to identify. While the existence of such particular neurons is just a conjecture, at least two things about the architecture of the visual cortex have been figured out. One, functional specialization: there are areas designated to recognize specific categories of objects (such as faces and places). Two, hierarchical processing: visual information is analyzed in layers, with the level of abstraction increasing as the signal travels deeper into the architecture.

Computational implementations of so-called “deep learning” algorithms have been around for decades, but they were usually outperformed by “shallow” architectures (architectures with only one or two layers). Since 2006, new techniques have been discovered for training deep architectures, and substantial improvements have followed in algorithms aimed at tasks that are easily performed by humans, like recognizing objects, voices, faces, and handwritten digits. A toy sketch of the shallow-versus-deep contrast is given below.
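To make that contrast concrete, here is a minimal, hypothetical sketch in Python (using only NumPy, with random untrained weights; it is an illustration of the idea, not the code behind any of the systems mentioned in this post). A “shallow” network applies a single hidden layer before its output, while a “deep” one stacks several layers, each transforming the representation produced by the previous one — the computational analogue of hierarchical processing.

```python
# Toy illustration of shallow vs. deep feedforward architectures.
# Weights are random and untrained; the point is only the structure.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with a tanh nonlinearity (random weights)."""
    w = rng.standard_normal((x.shape[-1], n_out)) * 0.1
    b = np.zeros(n_out)
    return np.tanh(x @ w + b)

x = rng.standard_normal(64)              # a fake input vector, e.g. pixel features

# "Shallow" architecture: input -> one hidden layer -> output
shallow_out = layer(layer(x, 32), 10)

# "Deep" architecture: the same input passes through a stack of layers,
# so later layers can build more abstract features on top of earlier ones
h = x
for width in (128, 64, 32, 16):
    h = layer(h, width)
deep_out = layer(h, 10)

print(shallow_out.shape, deep_out.shape)  # both end with 10 outputs (e.g. class scores)
```

The post-2006 advances mentioned above concern how the many layers of the deep variant can actually be trained, which is what had kept such architectures behind shallow ones for decades.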

Deep learning is used in Siri for speech recognition, and in Google’s Street View for the identification of specific addresses. In 2011, an implementation running on 1,000 computers was able to identify objects from 20,000 different categories with record (though still poor) accuracy. In 2012, a deep learning algorithm won a competition for designing drug agents, using a database of molecules and their chemical structures.

These achievements have rekindled public interest in Artificial Intelligence (well, not as much public interest as in vampire literature, but definitely among computer scientists). Now, while the improvements are substantial, especially when compared to what happened (or didn’t happen) in previous decades, AI still remains in the future. As Professor Andrew Ng pointed out, “my gut feeling is that we still don’t quite have the right algorithm yet.”

The reason is that these algorithms are still, in general, severely outperformed by humans. You can recognize your grandmother, for instance, with just a side glance, by the way she walks. Computers can barely detect and recognize frontal faces. The same goes for recognizing songs, identifying objects in pictures and movies, and a whole range of other tasks.

Whether comparison with human performance is a good criterion for intelligence is debatable. But I’ll leave that discussion for Part 2.