Tag Archives: apple

Apple Should Have Made iWork Truly Free

In one of its standard product-release events, Apple announced last week an upgrade of its line of tablets. The iPad Mini will have a higher-resolution display, and the full-size iPad will be slimmer — to the point of having its name upgraded to iPad Air. Both devices will have better processors and faster internet connections.

Since technology gadgets tend to get smaller and faster anyway, most people were not impressed by last week’s event. Nick Bilton at The New York Times even complained that the Apple event itself was boring, since it followed the same routine as previous ones, designed around the showmanship of the late Steve Jobs.

Mr. Bilton has a point. Nevertheless, Apple in fact announced something interesting and genuinely new last week: the new version of the Mac OS X operating system (called Mavericks) is now free of charge, as is the iWork suite — a set of applications aimed at competing with Microsoft Office — for anyone who buys a new Mac, iPhone, or iPad.

The free release of iWork with new devices is very welcome, but Apple would have made more of an impact — an actual impact — if it had made the suite free as in “free speech,” not just free as in “free food.” I’m referring, of course, to free software as defined by the Free Software Foundation: software that gives users the freedom to run, study, copy, modify, improve, and redistribute it.

The philosophy of free software has been around for about three decades now, and it has a large following in software development and academic circles. Besides the appeal of zero cost and the freedom to tweak the software to meet users’ particular needs, proponents argue that there are more subtle advantages over so-called “proprietary software.” According to free software guru Richard Stallman, for instance, free software means that “much wasteful duplication of system programming effort will be avoided,” so that the effort “can go instead into advancing the state of the art.”

There is in fact a lot of duplication when it comes to office software, as not only Apple (with iWork) but also Google (with Docs) and Sun Microsystems (with OpenOffice) have products similar to Microsoft Office.

Some Mac, iPhone, and iPad users will undoubtedly benefit from a free-of-charge, compatible, well-designed office suite, but the chunk of the market interested in a no-cost alternative to Microsoft Office has likely already been taken by existing high-quality free products, such as Google Docs. Given Apple’s large base of developers worldwide, and the reach of its products, not only the company but our entire society would be better served by an office suite that is truly free.

(A version of this article appeared in Washington Square News.)

Google Glass: Meh…

We’ve been irreversibly spoiled, it seems. Apparently, it has been too long since Apple launched its last market-changing device. The anxiety is dragging its stock down (it has dropped more than 40% over the last seven months), and the iPhone maker’s quarterly profits declined for the first time in a decade, the company reported.

Apple is almost certainly working on something new, and speculation abounds: a television, a game console, and a smart watch are among the most popular guesses. At the moment, however, the most anticipated new device is not from Apple but from Google: internet-connected eyewear known as Google Glass.

The main idea of Google Glass is to provide easy access to the kind of information offered by a smartphone, along with a camera conveniently positioned for shooting video. The concept has been around for a while: a ski goggle made by Oakley, for instance, displays speed, altitude, and incoming text messages.

At first, technologies like Google Glass seem appealing. For anyone who has tried to type a message while walking down the street, it would be nice to interact with a computer by voice, with visual feedback in the corner of a lens. But while this would stop people from awkwardly walking around holding a rectangular piece of glass with both hands, it wouldn’t stop them from looking like zombies.

As it turns out, the human visual system cannot focus on two things simultaneously. Speaking of the aforementioned ski goggles, neuroscientist David Strayer, who has studied attention and distraction for decades, warned: “you are effectively skiing blind; you’re going to miss a mogul or hit somebody.” Smart glasses undoubtedly present a risk, whether while practicing a sport, walking down the street, or driving.

The second issue is purpose. If you look at Google Glass’s website, you’ll see an advertising campaign centered on the word “share.” Setting aside the fact that the term has become a cliché, sharing is only really appealing to people whose activities make first-person video even minimally interesting. If you’re a skydiver or a circus artist, then maybe Google Glass is for you; otherwise your video won’t get that many “likes” or “+1’s.”

Last, and most important, there’s the concern about privacy. Because Google Glass has a camera, most users will make the people around them frown, since those people will rightly fear being filmed.

Though versions for developers are already available, the hyped Google eyewear is not expected to reach the general market soon, Google’s Eric Schmidt said last month. When it does, I doubt it will be much of a success, except perhaps in a niche market: a market for people who like to look cool and record first-person footage of the accidents they get involved in.

(A version of this article appeared in Washington Square News.)

A Side Effect of Technological Progress

When Apple launched its own navigation app for the iPhone, causing countless Internet jokes due to its poor quality in comparison with Google Maps, I looked at the event with disdain, not because I’m an Apple fan or because I don’t use maps, but because my way of navigating modern urban landscapes doesn’t rely on any portable digital device.

Like a tobacco addict who always carries a lighter and a pack of cigarettes, I always carry in my pockets an A4 sheet of paper, taken from the printer’s tray and folded three times, and a pen from Strand with supposedly enough ink to last seven years. Before venturing to a new place, I visit Google Maps on my laptop and draw a small copy of the neighborhood around my destination in one of the 16 unfilled slots of my folded sheet. “My method,” as they would say in academic publishing circles, has never gotten me lost.

It may seem a little outdated, but I don’t do it just for fashion’s sake. The problem with our era of technological transition is that, though one can find digital alternatives for much of what has historically been done with microchip-less devices, many of them do not match the “user experience” of the old “technologies.”

Take note-taking, for instance. There are a number of tablets with digital pens on the market: some with only an input surface, some with a real-time display, some even with cell phones built in. But often the experience is as bad as writing with a nail on a marble surface. When the “texture” of the interface is okay, the pen doesn’t quite touch the display: it’s like writing on one side of a pane of glass so the text appears on the other side. For the alternatives that give up the all-digital goal and adopt a “scanning” approach – the pen has real ink and writes on real paper, but also tracks its own position so the text can be digitized – the drawback is that you have to use a special kind of paper, or an additional gadget to locate the pen.

It’s difficult to predict which alternatives will survive, as non-technical factors are involved: sometimes the company with the best CEO, not the best product, succeeds. In reality, new technologies never retain all the good qualities of the old ones. Eventually, new generations lose access to the old way of doing things, never learning that it was actually better in some respects, and that quality is lost forever.

We are often too busy rushing towards the future to notice this side effect of progress. Maybe if we spent more time thinking about what we really need, and progressed a bit slower, we would actually get there faster.

(A version of this article appeared in Washington Square News.)

The State and Future of Artificial Intelligence, Part 1

(Appeared in Washington Square News.)

It’s an early morning of the newly arrived winter. The people I can see on the street from my window are wearing heavy coats, but it’s unclear how cold it is. I could open the window and let my built-in skin sensors take an approximate measurement, but I realize a much more accurate value can be obtained by pressing a button. “Siri, what’s the temperature outside?,” I ask, with a Brazilian accent that most Americans I talk with think is Russian. “Brr! It’s 32 degrees outside,” answers the rectangular piece of glass I’m holding. It’s a female voice, with an accent of her own. Artificial. That’s probably how I’d describe it.

The application, whose name is an acronym for Speech Interpretation and Recognition Interface, has faced a wave of sarcastic, philosophical, flirtatious, and mundane questions since it was made natively available on certain iOS devices in October 2011. Countless jokes featuring Siri have made their way through the nodes of the social-media graph, and books about her witty personality have been printed. But if you could take Siri on a time trip back to when your grandmother was 10 (fear not, the time-travel paradox involves your grandfather), she would definitely fulfill Clarke’s third law and qualify your talking device as “magic.” Perhaps she would even call Siri “intelligent.”

We’ll set aside the fact that Siri is indeed intelligent according to the definition she grabs from Wolfram Alpha when asked, for there’s no consensus about what it means to be intelligent (nor about what “meaning” means, for that matter; but enough metalinguistics). In what follows, I’ll put Siri in the context of recent developments in artificial intelligence. But first, come back from your time travel and book a trip into your brain. This one is easier: simply think of your grandmother.

When you do, a specific area in the back of your brain, responsible for face recognition, activates. Moreover, it has been conjectured that a single neuron “fires” when you think of her. It’s the “grandmother neuron” and, as the hypothesis goes, there’s one for every particular object you are able to identify. While the existence of such neurons is just a conjecture, at least two things about the architecture of the visual cortex have been figured out. One, functional specialization: there are areas dedicated to recognizing specific categories of objects (such as faces and places). Two, hierarchical processing: visual information is analyzed in layers, with the level of abstraction increasing as the signal travels deeper into the architecture.

Computational implementations of so-called “deep learning” algorithms have been around for decades, but they were usually outperformed by “shallow” architectures (architectures with only one or two layers). Since 2006, new techniques have been discovered for training deep architectures, and substantial improvements have followed in algorithms aimed at tasks that humans perform easily, like recognizing objects, voices, faces, and handwritten digits.
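To make the “layers” idea concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is an assumption for illustration: it uses numpy, random untrained weights, and toy sizes that do not correspond to any of the systems mentioned in this article. It only contrasts the shape of a shallow network with that of a deeper stack of layers.

import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with a ReLU nonlinearity (random, untrained weights)."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)  # ReLU keeps only positive responses

# A fake 28x28 grayscale image, flattened (think of a handwritten digit).
pixels = rng.random(28 * 28)

# "Shallow": a single hidden layer between the input and a 10-way output.
shallow_scores = layer(layer(pixels, 128), 10)

# "Deep": several stacked layers; each one re-represents the previous layer's
# output, so later layers work with increasingly abstract features.
h = pixels
for width in (256, 128, 64):
    h = layer(h, width)
deep_scores = layer(h, 10)

print(shallow_scores.shape, deep_scores.shape)  # both (10,): scores for 10 classes

The point is only structural: the deep version mirrors the hierarchical processing described above, while real systems learn the weights from data rather than drawing them at random.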

Deep learning is used in Siri for speech recognition, and in Google’s Street View for identifying specific street addresses. In 2011, an implementation running on 1,000 computers was able to identify objects from 20,000 different categories with record (though still poor) accuracy. In 2012, a deep learning algorithm won a competition for designing drug agents using a database of molecules and their chemical structures.

These achievements rekindled public interest in Artificial Intelligence (well, not as much public interest as in vampire literature, but definitely among computer scientists). Now, while the improvements are substantial, especially compared with what happened (or didn’t happen) in previous decades, AI still remains in the future. As Professor Andrew Ng pointed out, “my gut feeling is that we still don’t quite have the right algorithm yet.”

The reason is that these algorithms are still, in general, severely outperformed by humans. You can recognize your grandmother, for instance, from just a sideways glance, or by the way she walks. Computers can barely detect and recognize frontal faces. The same goes for recognizing songs, identifying objects in pictures and movies, and a whole range of other tasks.

Whether comparison with human performance is a good criterion for intelligence is debatable. But I’ll leave that discussion for part 2.