
There Should Be More Women in Computer Science

What would TV comedy be without stereotypes? Certainly very different, considering that the most-watched sitcom of the past season relies heavily on the notion that a blonde girl’s I.Q. could be higher, and that tech- and science-oriented guys perform disastrously in the social arena.

Although shows like The Big Bang Theory (and its British equivalent, The IT Crowd) obviously do not aim to influence the career paths of young teenagers, they unfortunately fuel the perception that tech and science are for socially awkward geeks, a perception that drives people away from these fields, especially women in Computer Science.

According to the National Center for Women & Information Technology, in 2010 only 18 percent of Computer and Information Science graduates were women, down from 37 percent in 1985. And data from the Higher Education Statistics Agency shows the downward trend continuing: 17.4 percent in 2012.

Why does this gender inequality in IT matter? For two main reasons. The first concerns the benefits of diversity in the workforce: a more diverse pool implies a higher probability of reaching qualified workers, fostering creativity and innovation. Second, and more important, since technology pervades every profession and personal activity, many of women’s demands are being neglected, because computational tools are developed by a workforce composed mostly of men.

Women certainly have the skills required. In fact, according to a recent study in the U.K. based on the BTEC vocational qualifications exam, girls outperform boys in skills-based science and technology subjects. In particular, as reported by the BBC, 15 percent of the girls taking the more challenging level gained the top grade, compared to 12 percent of the boys.

Aware of the problem, some schools are taking action to get more women into technology. At the University of Texas at Austin and at Virginia Tech, for instance, new female students are housed with more experienced engineering students of the same gender. The initiative reduces intimidation (with respect to the male majority) and creates a sense of community. At NYU Poly, a summer program on cyber-security designed exclusively for high-school girls was offered this year. Many attendees welcomed the initiative, saying they feel more comfortable without the company of guys who “brag so much.” Similarly, Cornell NYC Tech teamed up with the nonprofit Girls Who Code to offer an eight-week intensive Computer Science course for middle-school girls.

These are all interesting ideas, but unless they are adopted by most schools, the gender-gap numbers will hardly change significantly. Since the problem is cultural in nature, it would help if big influencers, and women themselves, approached tech-related occupations with less bias. Considering the enormous gender gap, women’s demonstrated ability, and the fact that IT occupations in the U.S. are projected to grow by 22 percent from 2010 to 2020, adding 758,800 new jobs (according to the Bureau of Labor Statistics), Computer Science is now probably the best option for girls deciding on their careers.

(A version of this article appeared in Washington Square News.)

Google Glass: Meh…

We’ve been irreversibly spoiled, it seems. Apparently, it has been too long since Apple launched its last market-changing device. The anxiety has sent its stock falling (down more than 40 percent over the past seven months), and the iPhone maker’s quarterly profits declined for the first time in a decade, the company reported.

Apple is almost certainly working on something new, and speculation abounds: a television, a game console, and a smart watch are among the most popular guesses. At the moment, however, the most anticipated new tech device is not from Apple but from Google: internet-connected eyewear known as Google Glass.

The main idea of Google Glass is to provide easy access to the kind of information a smartphone offers, along with a camera conveniently positioned for shooting video. The concept has been around for a while: a ski goggle made by Oakley, for instance, displays speed, altitude, and incoming text messages.

At first, technologies like Google Glass seem appealing. For anyone who has tried to type a message while walking down the street, interacting with a computer by voice, with visual feedback at the corner of a lens, sounds attractive. But while this would keep people from awkwardly walking along holding a rectangular piece of glass with both hands, it wouldn’t keep them from looking like zombies.

As it turns out, the human visual system cannot focus on two things simultaneously. Speaking of the aforementioned ski goggles, neuroscientist David Strayer, who has studied attention and distraction for decades, warned: “you are effectively skiing blind; you’re going to miss a mogul or hit somebody.” Smart glasses undoubtedly present a risk, whether one is playing a sport, walking down the street, or driving.

The second issue is purpose. If you look at Google Glass’ website, you’ll see an advertising campaign centered on the word “share.” Setting aside the fact that the term has become a cliché, sharing only really appeals to people whose activities make first-person video even minimally interesting. I mean, if you’re a skydiver or a circus artist, then maybe Google Glass is for you; otherwise your videos won’t get that many “likes” or “+1’s.”

Last, and most important, there is the concern with privacy. Since Google Glass has a camera, most users will draw frowns from those around them, who will rightly fear being filmed.

Though versions for developers are already available, the hyped Google eyewear is not expected to reach the general market soon, Google’s Eric Schmidt said last month. When it does, I doubt it will be much of a success, except perhaps in a niche market. A market for people who like to look cool and record first-person footage of the accidents they get into.

(A version of this article appeared in Washington Square News.)

A Side Effect of Technological Progress

When Apple launched its own navigation app for the iPhone, spawning countless Internet jokes about its poor quality in comparison with Google Maps, I looked at the event with disdain. Not because I’m an Apple fan or because I don’t use maps, but because my way of navigating modern urban landscapes doesn’t rely on any portable digital device.

Like a tobacco addict who always carries a lighter and a pack of cigarettes, I always carry an A4 sheet of paper, stolen from the printer tray and folded thrice, and a pen I got from Strand with supposedly enough ink to last seven years. Before venturing to a new place, I visit Google Maps on my laptop and draw a small copy of the neighborhood around my destination in one of the 16 unfilled slots on my sheet of paper. “My method,” as they would say in academic publishing circles, has never gotten me lost.

It may seem a little outdated, but I don’t do it merely as a fashion statement. The problem with our era of technological transition is that, although one can find digital alternatives for much of what has historically been done with microchip-less devices, many of them do not match the “user experience” of the old “technologies.”

Take note-taking, for instance. You can find a number of tablets with digital pens on the market: some offer only the input interface, some add a real-time display, some even include a cell phone. But often the experience is as bad as writing with a nail on a marble surface. And when the “texture” of the interface is okay, the pen doesn’t quite touch the display: it’s like writing on one side of a pane of glass so that the text appears on the other. As for the alternatives that give up the all-digital goal and adopt a “scanning” approach (the pen has real ink and writes on real paper, but also tracks its own position so the text can be digitized), the drawback is that you need a special kind of paper, or an additional gadget to locate the pen.

It’s difficult to predict which alternatives will survive, as non-technical factors are involved: sometimes the company with the best CEO, not the best product, succeeds. In reality, new technologies never retain all the good qualities of the old ones. Eventually, new generations lose access to the old way of doing things, never learning that it was actually better in some respects, and that quality is lost forever.

We are often too busy rushing toward the future to notice this side effect of progress. Maybe if we spent more time thinking about what we really need, and progressed a bit more slowly, we would actually get there faster.

(A version of this article appeared in Washington Square News.)