
On the Reproducibility Crisis in Science

In the last few months, a number of articles discussing the problem of reproducibility in science have appeared. Writing for the website phys.org in September, Fiona Fidler and Ascelin Gordon called it the “reproducibility crisis.” A piece in The Economist last October emphasized how this harms science’s capability to correct itself. In the same month, the Los Angeles Times’ Michael Hiltzik wrote about how “researchers are rewarded for splashy findings, not for double-checking accuracy.”

Reproducibility is one of the main pillars of the scientific method. Its meaning is straightforward: scientists should devise experiments and studies that can be repeated by themselves and by other researchers working independently, and the results obtained should be the same. If a study proposes a drug for a certain disease, for instance, it is clear why the method should be tested many times before the medication appears in drugstores. But lately there has been alarming news regarding the practice.

Michael Hiltzik cites a project where the biotech firm Amgen tried to reproduce 53 important studies on cancer and blood biology. Only 6 of them led to similar results. In another example, a company in Germany found that only a quarter of the published research backing its R&D projects could be reproduced.

The consequences are clearly bad for the public in general, but they also harm science itself, because they impair its reputation, which is already under severe attack on the topics of global warming and evolution by natural selection.

The blame should be shared between scientists and the general public. Due to people’s restless desire for novelty, and journalists’ eagerness to meet that demand, science news has its own section in major newspapers, where the curiousness of a discovery matters more than how reproducible the study is. The scientists’ fault is that research quality is measured essentially by number of publications, with little attention paid to reproducibility.

One initiative to solve the problem is the creation of a “reproducibility index,” to be attached to journals in a way similar to the “impact factor”. In Computer Science, researcher Jake Vanderplas proposed that the code used to produce the results should be well-documented, well-tested, and publicly available. Vanderplas argues that this “is essential not only to reproducibility in modern scientific research, but to the very progression of research itself.”
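As a rough illustration of the practice Vanderplas advocates, here is a minimal Python sketch, not taken from any of the cited articles: a documented analysis function paired with a unit test that pins down its expected behavior. The function name, the chosen statistic, and the data are my own assumptions, meant only to show what “well-documented, well-tested” can look like at the smallest scale.

import numpy as np


def effect_size(treatment, control):
    """Return Cohen's d for two independent samples of equal size.

    Documenting the exact formula used (here, the pooled standard
    deviation is the square root of the average of the two sample
    variances) lets others reproduce, or dispute, the reported numbers.
    """
    treatment = np.asarray(treatment, dtype=float)
    control = np.asarray(control, dtype=float)
    pooled_std = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2.0)
    return (treatment.mean() - control.mean()) / pooled_std


def test_effect_size_is_zero_for_identical_samples():
    # A trivial sanity check: identical samples must give an effect size of zero.
    sample = [1.0, 2.0, 3.0, 4.0]
    assert abs(effect_size(sample, sample)) < 1e-12

Publishing even a test this small alongside the code tells readers what the author believes the code should do, which is half of what reproducibility requires.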

Similar to how a successful smartphone is accompanied by a so-called ecosystem of apps, developers, and media content, science venues should also provide tools that make reproducibility easier. The submission of data, code, and other materials supplementing the main manuscript should be enforced. Rather than placing an extra burden on researchers, this would facilitate the validation and comparison of results, and speed up the pace at which important discoveries happen.

(A version of this article appeared in Washington Square News.)

Diseases That Affect Modern Scientists

In 1898, the Spanish scientist Santiago Ramon y Cajal published an essay titled Diseases of the Will, in which he described several figurative illnesses that afflicted 19th-century scientists, turning them into “contemplators,” “bibliophiles,” “theorists,” and other caricatures. A little over a decade into the 21st century, a new group of diseases has emerged as a threat to the progress of science. Three of them stand out: Productivity Addiction, Tenure Leisure, and Financial Anxiety.

Productivity Addiction affects early-career scientists. Its main cause is the wide gap between the number of job offers in academia (few) and the number of PhDs thrown onto the market every year (thousands). As a result of the competition, young researchers find themselves hostages to academic publications, and research interests are determined less by the appeal of a topic than by what a journal with a high impact factor wants. The main symptom of this disease is a Curriculum Vitae with five or more publications per year, either on a similar topic with largely overlapping content, or on widely different topics, solving a range of disparate problems without any clear line of research.

While Productivity Addiction can be chronic, some individuals are successfully treated by a procedure called Tenure. Tenure provides the researcher with job stability, so that he can be free to work on problems that are inherently interesting, regardless of funding or other factors. Unfortunately, the procedure can cause a severe side effect, known as Tenure Leisure. This disease manifests itself as a profound lack of interest in research: after years of too much hard work, the patient loses the drive required to pursue science. As with the previous disease, this one can also be diagnosed by close examination of the individual’s CV. One yearly publication or fewer after a decade of high productivity is a strong indicator of Tenure Leisure. Sadly, there is no cure.

The last important disease is more unpredictable in its onset: it can hit scientists from the early years of the PhD to late in their careers, and in some cases even past retirement. It is manifested by the realization that life is too short, and that the universe’s puzzles are endless anyway. The so-called Financial Anxiety is triggered by the observation that there are other jobs which, intriguingly, do not require as much mental effort but pay much better. Upon infection, the illness is fatal to one’s contribution to science. Sadly, like Productivity Addiction, Financial Anxiety is very contagious.

Such diseases are not necessarily the scientists’ fault. Rather, they are a consequence of the times we live in. Young researchers who want to pursue a career in science, for instance, have little chance of escaping Productivity Addiction.

(A version of this article appeared in Washington Square News.)

What is Mathematics?

When I was in math grad school, every time I went out with colleagues to a bar or restaurant, we had a hard time splitting the check because, as somebody would eventually mention, mathematicians are horrible at arithmetic, and at basic math in general.

Also surprising is the fact that everyone who studies advanced math ends up knowing the whole Greek alphabet. In fact, Greek letters are so widely used in math, and in the sciences that require mathematical modeling, that it led cartoonist Zach Weiner to joke: “Isn’t it weird how the Greek language is entirely made up of physics symbols?”

Now, of course, getting to know the Greek alphabet as a byproduct, and not being able to do basic math, are not a very good description of what a mathematician is. Perhaps here Paul Erdos gets a bit closer: “a mathematician is a device for turning coffee into theorems.”

And what about mathematics itself? While it is true that it is something concerned with arithmetic, involving Greek letters, and centered on theorems, those are merely some descriptive features. Let’s get a little more conceptual.

To begin with, math is often regarded as a science. Carl Gauss, one of the greatest mathematicians of all time, went even further: “mathematics is the queen of the sciences,” he is reported to have said. Well, Gauss, I’m sorry to inform you that this is not the case: math is not a science. The reason is that mathematical theories, once established, are not falsifiable. And “falsifiability,” a concept introduced by the philosopher of science Karl Popper, is widely regarded as one of the main characteristics of a scientific theory.

Another common notion associated with mathematics is that it is a language. Cut to Galileo Galilei: “The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word.”

No, Mr. Galilei. And if the Vatican decides to disagree with you on that, I have to support them this time around. A language is simply a tool for communication, for information transfer. Although it is true that mathematical notation allows talking about things in an idiosyncratic fashion, this is a very narrow picture of what mathematics is about.

Mathematics is a representation of reality, particularly suitable for computations. More than describing an object, math is concerned with what you can (and often, what you can’t) do with it. Take the circle of radius 1 centered at (0,0), for example. There are many different ways of representing it. Here are three. (1) The “parametric” way: how the function that turns x into the pair (cos(x), sin(x)) acts on the line of real numbers. (2) The “implicit” way: the set of points (x,y) in the plane whose distance to the point (0,0) is exactly 1. (3) The “limit” way: a circle is the limit of an infinite sequence of regular polygons inscribed in it, where every polygon in the sequence has one more edge than its predecessor.

Some representations are more useful than others in certain situations. For instance, a representation like (3) is preferred in computer graphics for drawing a circle on a screen. On the other hand, (1), extended to an arbitrary circle, is better suited to computing the centripetal force along a curve.
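To make the contrast concrete, here is a small Python sketch of my own, not part of the original argument, that uses representation (1) to produce points of the unit circle and representation (3) to approximate its circumference with inscribed regular polygons.

import math


def parametric_point(x):
    """Representation (1): the point of the unit circle at parameter x."""
    return (math.cos(x), math.sin(x))


def inscribed_polygon_perimeter(n_edges):
    """Representation (3): perimeter of a regular n-gon inscribed in the unit circle.

    Each side subtends an angle of 2*pi/n at the center, so its length is 2*sin(pi/n).
    """
    return n_edges * 2.0 * math.sin(math.pi / n_edges)


if __name__ == "__main__":
    # The polygon perimeters approach the circumference 2*pi as n grows,
    # which is what makes representation (3) handy for drawing on a screen.
    for n in (6, 24, 96, 384):
        print(n, inscribed_polygon_perimeter(n))
    print("2*pi =", 2 * math.pi)
    print("point at x = pi/4:", parametric_point(math.pi / 4))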

An important fact to notice is that our brains also “implement” representations of objects, although the way in which we mentally represent concepts, and use them in computations, remains a mystery.

In a sense, mathematical constructs are an extension of that representation. When our ancestors were using rocks to help count sheep (one of the earliest manifestations of mathematics), they were, so to speak, extending the representation and computing capabilities of their brains to outside their heads.

Next time you feel like having an extracorporeal experience, go solve a mathematical puzzle.

(An updated version of this article appeared in Washington Square News.)

We Need Newer Atheists

With the rise of modern science, the “God hypothesis” — as famously referred to by the French mathematician Pierre-Simon Laplace — turned out to be less and less necessary to explain natural events. The phenomenon caused religiosity to decline, especially among educated people. As secular States developed and freedom of expression emerged as a human right, criticism of religion, by both philosophers and scientists, inevitably became widespread.

In this sense, the movement led by Richard Dawkins, Dan Dennett, Sam Harris, and the late Christopher Hitchens — the so-called New Atheists — is not new, although it was triggered by a new use of the power of religion: the terrorist attacks of 9/11 wouldn’t have happened if it weren’t for the concept of reward in an afterlife, and blind faith in it.

On sober review, the New Atheists present a modern, updated supporting framework for the individual who is able to overcome the frightening concept of eternal punishment, often solidified during a religious childhood, and to realize that “everything is possible” or “God works in mysterious ways” are merely shortcuts for intellectual escapism. Their books provide excellent guidance out of the mazes and away from the traps of faith in its pure form.

However, despite more recent attempts, such as Dawkins’ The Magic of Reality and Harris’ Free Will, the New Atheists, or the “old” ones for that matter, don’t have the same level of success in laying the foundations for a fulfilling life with atheism.

As it turns out, the pursuit of happiness is a journey far more complex than the logic of the chemical components involved and the understanding of the evolutionary processes behind them. Throughout history, beyond supernatural-related affairs, religion took care of many aspects related to the self and to life in society, oftentimes providing valuable answers in the individual and collective realms, as in the practice of meditation and the act of gathering to celebrate a common purpose, for instance.

But as technology gets more powerful, the world more connected, and cultures clash on an unprecedented scale, it becomes too dangerous to leave guidance on these complicated topics to people who base their rationale on dogmas, supposedly sacred books from the ages of tribalism, dreams, and entities indistinguishable from imaginary friends.

This gap has been recognized. We can see the emergence of interesting movements, such as Secular Humanism, and thinkers are starting to approach the topic — see, for instance, recent work by the British philosopher Alain de Botton. Unfortunately, though, there is no solid, structured proposal, and the average atheist is in large part left to find his way through life individually, having to learn and test alternatives by himself.

In part, this is due to the fact that the territory of the mind is largely unknown, and an honest, confident answer can’t be formulated yet. It’s good that we treat the topic carefully, but we need more people, “newer” atheists, to start thinking and acting toward that goal. Because, as the cliche goes, life is too short — a fact we atheists are very aware of.

(A version of this article appeared in Washington Square News.)

When I Decided to Be a Scientist

My conscious mind never witnessed the emergence of the idea that I wanted to be a scientist. In fact, even upon request, there is no particular event that collapses into the moment when I made the decision. Yet, obviously I am attracted to this human endeavor, for the first two sentences alone reveal that I’ve been reading about free will and quantum mechanics, two of the most fascinating topics in science. Or should I say philosophy?

Furthermore, no one who holds a degree in mathematics can securely say he wants a career in science before being offered a job at Goldman Sachs. But wait, mathematics is not really a science, is it? I’m looking at you, Gauss and Popper.

When I was a kid living on a farm, every once in a while I saw a grader flattening the road in front of our house. A grader is, you know, a giant tractor with six wheels and a cabin with lots of buttons, switches, and handles. That was one of the most sophisticated things my eyes had ever had direct contact with. Driving it had to be a lot of fun, I thought, so I remember turning to my dad one day and saying that I wanted to be a grader pilot. He didn’t respond well to the idea and I didn’t become one, but if I were asked when I decided to be a grader pilot, at least I’d have half of a story. (Just between us, if you have a friend who drives a grader, would you please let me know? I still want to try it.)

In a sense, though, doing science is like operating a grader. I’m kidding! If I had enough imagination to build such a stretched metaphor, I would be a writer. By the way, I don’t remember deciding to be a writer either, and here I am having to write scientific reports if I want to keep my job. Well, to be honest, I said in my PhD application that I wanted to write popular science books one day. That’s still true, but it doesn’t mean it will happen.

My point is that things never happen exactly as you want. The best you can do is tell a story in a way that makes it look like everything happened the way it should. I adapted the last two sentences from the lyrics of a song I wrote a while ago. You see, I also want to be a rock star. But who doesn’t? In my teenage daydreams, I played guitar in Avril Lavigne’s band. A detail of minor importance was that she was also my wife. Unsurprisingly, none of these things ended up happening (but Avril, just in case, you know how to contact me).

Now, I won’t judge myself too hard for not having a story. After all, the question of when one knows anything is not a trivial one. Whether it is possible to observe the moment when knowledge accumulates in the brain and, if so, how such an observation could be made are still open problems. Will they one day be answered? It’s difficult to say. But if so, you bet it will be due to the work of scientists. There is no sustainable progress in knowledge without some form of scientific investigation. Hey, look, I’ve found an answer to a slightly different question! Will that do?

The Struggle for Life in Planet Science

(Appeared in Washington Square News.)

Face detection, which consists of finding the approximate center points of faces in a digital image, is one of the most notable problems in Computer Vision, a field of research that deals with image understanding. Considerable progress was obtained in the first decade of the present century, and, these days, algorithms for it are widely available in photo-related devices and software applications.
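As a concrete illustration of how widely available such algorithms have become, here is a hedged Python sketch using OpenCV’s stock Haar-cascade face detector, one classic approach rather than the current state of the art; the input file name is hypothetical.

import cv2

# Load the frontal-face cascade that ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("group_photo.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is a bounding box (x, y, w, h); the problem as defined above
# asks for the approximate center point of each face.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    print("face center:", (x + w // 2, y + h // 2))

A handful of lines like these, running on an ordinary laptop, is the end product of the decade of research the paragraph above alludes to.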

Let us suppose that you find yourself interested in getting informed about the state of the art in face detection. After a quick look at the unsurprisingly existing Wikipedia article on it (whose text lacks objectivity and credibility), you might decide to ask for some help from Google Scholar or Microsoft Academic Search. The first will get you over 2.8 million results. The second, about 3,800 publications. Now, in order not to panic at those numbers and immediately lose interest, you should probably believe that there are better ways of narrowing your search.

Incidentally, there are. You can restrict your Google Scholar search to results published only this year: 45,700. “Keep calm and carry on”, the popular internet meme reminds you. Try excluding patents and citations. 39,600 results. Sigh. Let us see what Microsoft Academic Search has to offer: a list of 598 conferences and 231 journals. “Not so bad”, you think, “they can be sorted by quality, or some other criterion.” You search for “top conferences in computer vision” on Google and get as the first result the Microsoft Academic Search link for a list of conferences topped by CVPR (Computer Vision and Pattern Recognition). There are 78 listed publications related to face detection in this conference since 2012. If this doesn’t look scary to you, consider that a publication generally has more than six pages, and do the math.

The flood of academic articles is not a problem per se. Except for rare counterexamples suitable for philosophical discussions, as far as knowledge is concerned, the more, the better. The issue is quality, of course. Is it too easy to publish, then? Not really. Everyone who has ever tried to publish something, even in non-mainstream venues, knows that reviewers are not the kindest of people. In fact, gratuitously hostile reviews are not uncommon. Yet the best scientific works on any topic are obscured by sketches of ideas with only potential usefulness, tedious variations of methods that outperform previous versions by half a percent or so, and other findings of dubious significance. If even low-quality works pass the thin review filter, the pressure they exert must be very high.

“Pressure” is, perhaps, the key word. In his Essay on the Principle of Population, Thomas Malthus argues that, due to the pressure of population growth, policies designed to help the poor are destined to fail. The population in question now is that of PhDs. Just to mention two numbers, in the United States about 20,000 PhDs were produced in 2009; and, considering the subset of the biological sciences, by 2006 only 15% were in tenure positions six years after graduating*. Most of them end up as postdocs until they publish enough to get onto the tenure track, or go to industry, to steal jobs at which undergrads would do just fine.

History has told us that Malthus’ essay was critical for Darwin’s insight into his theory of Evolution by Natural Selection. According to the latter, individuals are doomed to compete with each other for the limited existing food. In times when Evolution by Natural Selection is perhaps the most established of scientific theories, the fact that life in the scientific world is subject to a Darwinian struggle for survival is, to say the least, disturbing.

* Source: http://www.nature.com/news/2011/110420/full/472276a.html