Monthly Archives: January 2013

There Are Some Good Educational Channels on YouTube, But There Should Be More


An interesting set of smart, well-produced educational channels has emerged on YouTube in the last couple of years. Though YouTube in general has degenerated into something resembling television, some educational series are really worth the extra time spent watching ads.

That there are internet-based educational programs is not news. Universities have been offering online courses for many years now, iTunes has a whole section dedicated to educational videos (iTunes U), Massive Open Online Course websites are starting to pop up (see Coursera and Udacity, for instance), and of course there’s TED. But on YouTube, a user-generated database where the popularity of videos is guided more by a survival-of-the-fittest rule than by an intelligent-design principle, it took a while for educational content to gain prominence.

But it eventually happened. At the top of the list are Crash Course, a channel created by two brothers who themselves evolved from authors of more frivolous content, and MinutePhysics, a series of short lectures on physics that can fairly be described as the video version of XKCD with more modest ambitions in the comedy department.

Crash Course is run by John and Hank Green, who became YouTube sensations by making a channel out of video messages to each other over the course of a year (the Brotherhood 2.0 project). Crash Course consists of a series of lessons, each roughly ten minutes long, at first in World History (by John) and Biology (by Hank). Those courses have finished, and the brothers now teach English Literature and Ecology, respectively. The subjects are somewhat beside the point, though. What makes the lessons worth watching are mainly the writing and performing skills of the two, especially John, who often engages in clever jokes and interesting self-conscious footnotes about society and the process of reaching adulthood. The dialogues between him and his younger self, a student he refers to as “me from the past,” are just brilliant. Take a look at the first episode of the World History series, for instance.

MinutePhysics, by Henry Reich, deserves a bit more credit for its content. The channel features short videos, with hand-drawn frames, about physics. Topics range from the popular Schrödinger’s Cat thought experiment to an open letter to President Obama about the outdated high-school physics curriculum in the U.S. Though Reich is certainly not the first specialist to step down to the layperson’s level and teach advanced physics, he is the first to successfully adapt the approach to the YouTube format, at least if you measure by the number of subscribers.

As of late January 2013, MinutePhysics had about 950,000 subscribers, ranking 170th among all channels by that criterion. In comparison, Apple’s channel was at position 176. Crash Course appeared in 484th position, with about 450,000 subscribers, ahead of, for instance, CBS’s channel (534th).

There are of course a few other good examples of high-quality educational channels, such as Sixty Symbols (for the physics aficionado) and OULearn (for a sample, watch their “60 Second Adventures” series in economics and in thought). But this is not meant to be a complete list. The point is that, thankfully, it’s not necessary to “follow the crowd” and exploit the ridiculous, the laughable, the titillating, the feline, or the equine (I’m looking at you, Gangnam Style and your parodies) to become popular on YouTube.

What makes video lessons interesting is obvious: due to the multimedia nature of the format (audio, text, images, animations), more complex information can be transmitted, generally faster than in any other “unidirectional” way. Yet most educators still choose to write a book when the thought of sharing knowledge comes to mind. Perhaps that’s the right choice in certain areas (such as math and philosophy) where some reflection should happen after the information is acquired, or maybe authors lack the skills to produce good-quality video content. However, considering that tools for video production are widely available (most laptops these days come with cameras and movie-editing software), and that sharing video online is easy and free, video is definitely underused as a medium for transmitting knowledge, especially in times when time is so scarce.

Now, if you are planning to launch yourself on a vlogging journey, even if it’s just for the sake of making a name, consider that, as Oliver and Young wrote many decades before the Internet even existed, “’t’ain’t what you do, it’s the way that you do it.” Or, as Crash Course’s John Green says to close his lessons, “don’t forget to be awesome.”

(A version of this article appeared in Washington Square News.)

The State and Future of Artificial Intelligence, Part 2

(Appeared in Washington Square News.)

February 2011. The news was starting to report the protest movements in the Arab world that would eventually be known as the Arab Spring. Right after the 16th of that month, though, some headlines were reserved for the cognitive achievements of a certain character named Watson. Long story short, Watson took home the one-million-dollar first prize of a trivia-type televised game called “Jeopardy!”. While winning that kind of money is usually big news from the winner’s point of view, almost nobody else would have cared if it weren’t for the fact that Watson was a cluster of 2,880 computer processors with 16 terabytes of RAM built by IBM.

That is by all means impressive. But as hardware and software quietly and consistently evolve, even such events are ceasing to draw attention. Not long ago, a computer outperformed a highly skilled human at chess. At some point before that, a machine computed an approximation of pi more accurate than any brain-bearing creature would be able to obtain in a lifetime. Examples of machines outperforming humans in certain cognitive tasks abound, but can it be said that these machines are intelligent? Well, they are definitely able to acquire and apply knowledge and skills (the definition I grab from my Mac’s dictionary), but let’s bring a more sophisticated view to the table.

In 1950, Alan Turing proposed a test in which a machine should exhibit intelligent behavior indistinguishable from that of an actual human. The Turing Test, as it is now known, became a standard for AI in computer science circles. But, as it turns out, that’s not a very reassuring definition. You can easily imagine a setting where, with the proper interface, most people wouldn’t be able to tell whether they are playing chess or Jeopardy! against a human or a computer. Besides, computer systems are able to display that level of intelligence in more artistic tasks. For example, researchers at the Sony Computer Science Lab in Paris recently developed an algorithm for jazz composition that passed the Turing Test.

That particular experiment made me feel more comfortable with my opinion about jazz, but before you hate me, consider this: eventually, machines will probably be able to pass the Turing Test in any cognitive or creative task you can imagine (if, of course, we manage to keep existing as a society that supports scientific research for long enough). Besides, if there has been no news of software passing the Turing Test for your favorite style of music, that’s probably because nobody has bothered to work on it yet.

Building an android that resembles and acts like a human (as in Spielberg’s movie A.I.) seems much more difficult, but at least in principle (that is, in the form of a mathematical theorem), there’s nothing preventing one from being built. Whether such machines will ever exist, and how we’ll react to them, is time’s burden to answer and sci-fi authors’ task to imagine. But, so far, there’s little to be said against their inevitability. Perhaps the Watson of the future, after winning a championship, will go out with some friends to celebrate over drinks, and even brag about it on its favorite online social network.

Algorithms and machines that outperform humans at increasingly sophisticated tasks keep appearing. We will learn about them, be impressed that we were able to build them, and get mad that some of them end up stealing our jobs. Then we’ll move on with our lives, eventually getting grumpy when they don’t work properly, the same way we get grumpy at people over trifles.

So, yes, it’s fine to call these machines and algorithms intelligent when they pass the criteria of intelligence that we set for them. In fact, it’s important not to forget what they are able to achieve and how they improve our lives. On the other hand, they are constantly raising the threshold for an activity, or attitude, to be called intelligent, which in turn pushes the limits of what we can do as human beings. That should remind us to be careful and save the adjective for when it is truly appropriate.

Finally, to say that something is intelligent is less risky than saying that it “knows,” it “feels,” it “thinks,” or it is “conscious.” These are deeper philosophical mazes, often and improperly included in the AI debate, and they are the ones that really cause controversy. I’ll conveniently note that this text is approaching its size limit and skip those topics, thus preventing you from getting self-conscious and having existential thoughts. Also, rest assured that this was not written by an AI system designed to convince readers that they are computers. (Subtly smiling.)

The State and Future of Artificial Intelligence, Part 1

(Appeared in Washington Square News.)

It’s an early morning of the just-arrived winter. The people I can see on the street from my window wear heavy coats, but it’s unclear how cold it is. I could open the window and let my built-in skin sensors take an approximate measurement, but I realize a much more accurate value can be obtained by pressing a button. “Siri, what’s the temperature outside?,” I ask, with a Brazilian accent most Americans I talk with think is Russian. “Brr! It’s 32 degrees outside,” answers the piece of rectangular glass I hold. It’s a female voice, with an accent of her own. Artificial. That’s probably how I’d describe it.

The application, whose name is an acronym for Speech Interpretation and Recognition Interface, has fielded a wave of sarcastic, philosophical, flirtatious, and mundane questions since it was made natively available on certain iOS devices in October 2011. Countless jokes featuring Siri have made their way through the nodes of the social-media graph, and books about her witty personality have been printed. But if you could take Siri on a trip back in time to when your grandmother was 10 (fear not, the time-travel paradox involves your grandfather), she would definitely fulfill Clarke’s third law and qualify your talking device as “magic.” Perhaps she would even call Siri “intelligent.”

We’ll skip over the fact that Siri is indeed intelligent according to the definition she grabs from Wolfram Alpha when asked, for there’s no consensus about what it means to be intelligent (nor about what “meaning” means, as a matter of fact; but enough about metalinguistics). In what follows, I’ll put Siri in the context of recent developments in artificial intelligence. But first, come back from your time travel and book a trip into your brain. This one is easier: simply think of your grandmother.

When you do, a specific area in the back of your brain, responsible for face recognition, activates. Moreover, it has been conjectured that a single neuron “fires” when you think of her. It’s the “grandmother neuron” and, as the hypothesis goes, there’s one for every particular object you are able to identify. While the existence of such dedicated neurons is just a conjecture, at least two things about the architecture of the visual cortex have been figured out. One, functional specialization: there are areas designated to recognize specific categories of objects (such as faces and places). Two, hierarchical processing: visual information is analyzed in layers, with the level of abstraction increasing as the signal travels deeper into the architecture.

Computational implementations of so-called “deep learning” algorithms have been around for decades, but they were usually outperformed by “shallow” architectures (architectures with only one or two layers). Since 2006, new techniques have been discovered for training deep architectures, and substantial improvements have followed in algorithms aimed at tasks that humans perform easily, like recognizing objects, voices, faces, and handwritten digits.
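To make the “layers” idea concrete, here is a minimal sketch, in Python with NumPy, of the forward pass of a small deep network: each layer applies a linear map followed by a simple nonlinearity, so later layers operate on increasingly abstract re-descriptions of the raw input, much like the hierarchical processing described above. The layer sizes and random weights below are arbitrary placeholders for illustration, not the architecture of any system mentioned in this article.

import numpy as np

def relu(x):
    # Elementwise nonlinearity applied between layers
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    # Pass the input through a stack of fully connected layers;
    # each layer re-represents the output of the previous one.
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)
    return h

rng = np.random.default_rng(0)
sizes = [64, 32, 16, 8]  # input size followed by three hidden layers (arbitrary)
weights = [0.1 * rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

x = rng.standard_normal(64)  # stand-in for raw input, e.g. pixel intensities
print(forward(x, weights, biases).shape)  # prints (8,)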

Deep learning is used in Siri for speech recognition, and in Google’s Street View for the identification of specific addresses. In 2011, an implementation running on 1,000 computers was able to identify objects from 20,000 different categories with record (though still poor) accuracy. In 2012, a deep learning algorithm won a competition to design drug agents using a database of molecules and their chemical structures.

These achievements rekindled public interest in Artificial Intelligence (well, not as public as the interest in vampire literature, but definitely among computer scientists). Now, while the improvements are substantial, especially when compared to what happened (or didn’t happen) in previous decades, AI still remains in the future. As Professor Andrew Ng pointed out, “my gut feeling is that we still don’t quite have the right algorithm yet.”

The reason is that these algorithms are still, in general, severely outperformed by humans. You can recognize your grandmother, for instance, with just a sideways glance, by the way she walks. Computers can barely detect and recognize frontal faces. The same goes for recognizing songs, identifying objects in pictures and movies, and a whole range of other tasks.

Whether comparison with human performance is a good criterion for intelligence is debatable. But I’ll leave that discussion for Part 2.

When I Decided to Be a Scientist

My conscious mind never witnessed the emergence of the idea that I wanted to be a scientist. In fact, even upon request, there is no particular event that collapses into being the moment when I made the decision. Yet I am obviously attracted to this human endeavor, for the first two sentences alone reveal that I’ve been reading about free will and quantum mechanics, two of the most fascinating topics in science. Or should I say philosophy?

Furthermore, no one who holds a degree in mathematics can safely say he wants a career in science before being offered a job at Goldman Sachs. But wait, mathematics is not really a science, is it? I’m looking at you, Gauss and Popper.

When I was a kid living on a farm, every once in a while I saw a grader flattening the road in front of our house. A grader is, you know, a giant tractor with six wheels and a cabin full of buttons, switches, and handles. It was one of the most sophisticated things my eyes had ever had direct contact with. Driving it must be a lot of fun, I thought, so I remember turning to my dad one day and saying that I wanted to be a grader pilot. He didn’t respond well to the idea and I didn’t become one, but if I were asked when I decided to be a grader pilot, at least I’d have half a story. (Just between us, if you have a friend who drives a grader, would you please let me know? I still want to try it.)

In a sense, though, doing science is like operating a grader. I’m kidding! If I had enough imagination to build such a stretched metaphor, I would be a writer. By the way, I don’t remember deciding to be a writer either, and here I am having to write scientific reports if I want to keep my job. Well, to be honest, I said in my PhD application that I wanted to write popular-science books one day. That’s still true, but it doesn’t mean it will happen.

My point is that things never happen exactly as you want. The best you can do is to tell the story in a way that makes it look like everything happened the way it should. I adapted the last two sentences from the lyrics of a song I wrote a while ago. You see, I also want to be a rock star. But who doesn’t? In my teenage daydreams, I played guitar in Avril Lavigne’s band. A detail of minor importance was that she was also my wife. Unsurprisingly, none of these things ended up happening (but Avril, just in case, you know how to contact me).

Now, I won’t judge myself too harshly for not having a story. After all, the question of when one knows anything is not a trivial one. Whether it is possible to observe the moment when knowledge accumulates in the brain and, if so, how such an observation could be made are still open problems. Will they one day be answered? It’s difficult to say. But if so, you can bet it will be due to the work of scientists. There is no sustainable progress in knowledge without some form of scientific investigation. Hey, look, I’ve found an answer to a slightly different question! Will that do?

The Struggle for Life in Planet Science

(Appeared in Washington Square News.)

Face detection, which consists of finding the approximate center points of faces in a digital image, is one of the most notable problems in Computer Vision, a field of research that deals with image understanding. Considerable progress was made in the first decade of the present century, and, these days, algorithms for it are widely available in photo-related devices and software applications.
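To give a sense of just how widely available these algorithms are, here is a minimal sketch in Python using the pre-trained frontal-face detector that ships with the OpenCV library. The image file name is a placeholder, and this is just one off-the-shelf detector among many, not the state of the art discussed below.

import cv2  # OpenCV ships with pre-trained Haar-cascade face detectors

# Load the bundled frontal-face cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")  # "photo.jpg" is a placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is a bounding box (x, y, width, height)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print("face centered near", (x + w // 2, y + h // 2))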

Let us suppose that you find yourself interested in getting informed about the state of the art in face detection. After a quick look at the unsurprisingly existing Wikipedia article on the subject (whose text lacks objectivity and credibility), you might decide to ask for some help from Google Scholar or Microsoft Academic Search. The first will get you over 2.8 million results; the second, about 3,800 publications. Now, in order not to panic at those numbers and immediately lose interest, you should probably believe that there are better ways of narrowing your search.

Incidentally, there are. You can restrict your Google Scholar search to results published this year only: 45,700. “Keep calm and carry on,” the popular internet meme reminds you. Try excluding patents and citations. 39,600 results. Sigh. Let us see what Microsoft Academic Search has to offer: a list of 598 conferences and 231 journals. “Not so bad,” you think, “they can be sorted by quality, or some other criterion.” You search for “top conferences in computer vision” on Google and get as the first result the Microsoft Academic Search link for a list of conferences topped by CVPR (Computer Vision and Pattern Recognition). There are 78 listed publications related to face detection in this conference since 2012. If this doesn’t look scary to you, consider that a publication generally runs more than 6 pages and do the math.

The flood of academic articles is not a problem per se. Except for rare counterexamples suitable for philosophical discussion, when it comes to knowledge, the more, the better. The issue is quality, of course. Is it too easy to publish, then? Not really. Everyone who has ever tried to publish something, even in non-mainstream venues, knows that reviewers are not the kindest of people. In fact, gratuitously hostile reviews are not uncommon. Yet the best scientific works on any topic are obscured by sketches of ideas with only potential usefulness, tedious variations of methods that outperform previous versions by half a percent or so, and other findings of dubious significance. If even low-quality works pass the thin review filter, the pressure they exert must be very high.

“Pressure” is, perhaps, the key word. In his Essay on the Principle of Population, Thomas Malthus argues that, due to the pressure of population growth, policies designed to help the poor are destined to fail. The population in question now is that of PhDs. Just to mention two numbers: in the United States, about 20,000 PhDs were produced in 2009; and, considering the subset of the biological sciences, by 2006 only 15% were in tenure positions six years after graduating*. Most of them end up as postdocs, until they publish enough to get onto the tenure track, or go to industry, to steal jobs at which undergraduates would do just fine.

History has told us that Malthus’s essay was critical for Darwin’s insight into his theory of Evolution by Natural Selection. According to the latter, individuals are doomed to compete with one another for limited food. In times when Evolution by Natural Selection is perhaps the most established of scientific theories, the fact that life in the scientific world is subject to a Darwinian struggle for survival is, to say the least, disturbing.

* Source: http://www.nature.com/news/2011/110420/full/472276a.html