Quantum Methods in Computer Vision

One of the most popular accounts of the “spooky” behavior of particles in quantum physics is the so-called double-slit experiment. Originally designed by Thomas Young in the early 1800s to demonstrate the wave nature of light, it was adopted and extended by Richard Feynman in the 1960s as a thought experiment to highlight the bizarre behavior of subatomic particles.

The setup is simple: an electron gun shooting electrons at a wall with two slits, and an observing screen behind the wall to capture the statistical behavior of the electrons that pass through the slits.

[Figure: the double-slit setup]

According to our intuition, the distribution of electrons on the observing screen should be a bell-shaped curve, resulting from the sum of two bell-shaped curves corresponding to the patterns of electrons passing through each slit individually.

[Figure: the intuitive prediction]

But, as it turns out, what is observed is a wave-interference pattern, like the one we would expect if the electron gun were replaced by a wave source and the observing screen measured the maximum height of the wave hitting it.

[Figure: the observed interference pattern]

Mathematically, this means that the probability of an electron arriving at each point of the observing screen must be modeled in a way that allows for interference to happen. This is done by using probability amplitudes that are complex waves (waves of complex numbers), and computing the final probability as the squared magnitude of a sum of waves.
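As a toy numerical illustration (not from the paper): two amplitudes of equal magnitude but opposite phase cancel completely when added before squaring, whereas their classical probabilities can only add.

```python
import numpy as np

# Two probability amplitudes (complex numbers) arriving at the same
# point on the screen, one contribution per slit.
a1 = 0.5 * np.exp(1j * 0.0)    # wave through slit 1
a2 = 0.5 * np.exp(1j * np.pi)  # wave through slit 2, half a cycle out of phase

# "Classical" combination: probabilities (squared magnitudes) just add.
p_classical = abs(a1) ** 2 + abs(a2) ** 2  # 0.5

# Quantum combination: amplitudes add first, then we square the magnitude.
p_quantum = abs(a1 + a2) ** 2              # 0.0, complete cancellation

print(p_classical, p_quantum)
```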

Thus, in order to explain quantum phenomena, physicists had to introduce new types of probability functions, such that the combination of probabilities would allow for cancellation effects to occur. In “classical” probability calculations, distributions can only add up.

Of course, probabilities are in the toolboxes of many branches of science, not only quantum physics. But outside physics there has been no need for the stranger kind so far. Recently, however, we decided to adopt the wave kind of probability function for problems in computer science. In a paper recently published at the International Conference on Image Processing, we show how the idea works for a particular problem in computer vision.

We adapted a classical algorithm for detecting circles in images, known as the Hough transform. In this method, each edge point contributes a probability distribution (a set of guesses) for the location of the center of the circle it belongs to. This distribution is constant along the line perpendicular to the edge.

[Figure: an edge point with its tangent and the voting line perpendicular to it]

When the probabilities corresponding to all edges are added, the centers of the circles stand out.
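Here is a minimal sketch of this classical voting scheme, assuming edge points and gradient angles have already been extracted (all names and parameters are illustrative, not our actual implementation):

```python
import numpy as np

def hough_circle_centers(edge_points, gradient_angles, shape, r_min=5, r_max=50):
    """Accumulate constant votes for circle centers.

    Each edge point casts one vote per candidate radius, along the line
    perpendicular to the edge (the gradient direction). Local maxima of
    the returned accumulator are candidate circle centers.
    """
    acc = np.zeros(shape)
    for (row0, col0), theta in zip(edge_points, gradient_angles):
        for radius in range(r_min, r_max + 1):
            for sign in (+1, -1):  # the circle may curve to either side
                row = int(round(row0 + sign * radius * np.sin(theta)))
                col = int(round(col0 + sign * radius * np.cos(theta)))
                if 0 <= row < shape[0] and 0 <= col < shape[1]:
                    acc[row, col] += 1
    return acc
```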

[Figure: result on a synthetic image]

This works well for simple, clean cases. But problems arise in more complex, real-world images, such as this picture of a mouse embryo.

[Figure: result on a real image (mouse embryo)]

The main issue is that many edges that do not come from cell boundaries also contribute votes. When the default constant votes are replaced by wave probabilities, the final sum has much less clutter, thanks to the cancellations that result from wave interference. The centers still stand out because the waves reaching them are “in phase.”

[Figure: wave votes in the parameter domain]
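A sketch of one way the wave votes could be realized: each vote is a complex number whose phase grows with the distance traveled from the edge point, so votes reaching a true center (all having traveled the same distance, the circle's radius) arrive in phase, while votes landing elsewhere have scattered phases and tend to cancel. The phase model and the wavelength parameter below are assumptions for illustration, not the exact formulation in the paper.

```python
import numpy as np

def complex_hough_centers(edge_points, gradient_angles, shape,
                          r_min=5, r_max=50, wavelength=8.0):
    """Same voting as before, but with complex-valued 'wave' votes."""
    acc = np.zeros(shape, dtype=complex)
    for (row0, col0), theta in zip(edge_points, gradient_angles):
        for radius in range(r_min, r_max + 1):
            # Phase proportional to the distance the wave has traveled.
            vote = np.exp(2j * np.pi * radius / wavelength)
            for sign in (+1, -1):
                row = int(round(row0 + sign * radius * np.sin(theta)))
                col = int(round(col0 + sign * radius * np.cos(theta)))
                if 0 <= row < shape[0] and 0 <= col < shape[1]:
                    acc[row, col] += vote
    return np.abs(acc) ** 2  # squared magnitude, as in quantum probabilities
```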

In the article, titled Complex-Valued Hough Transforms for Circles, we show that the modified algorithm is better than the classic one not only visually but also statistically, as demonstrated by experiments with hundreds of natural and synthetic images.

We believe quantum physics could contribute many ideas for better algorithms in computer science, rather than being regarded only as a potential means of obtaining massive computing power in the form of quantum computers. Our current research, for instance, focuses on the mathematical formalism of spin and its use in detecting symmetric shapes in images.

Link to Conference Paper

Reason Versus Intuition

Emotion and intuition are “built-in” mechanisms for decision making. Feelings like fear, anxiety, and empathy tell us right away something about a situation, and what should be done about it.

In contrast, one can of course use reason: weigh all the factors involved carefully, and analyze the possible outcomes.

For big matters such as careers and relationships, it seems that reason would be the best choice, since going by what one “feels” looks like too primitive and simplistic a method to rely on.

Not so, according to some prominent public figures:

Sigmund Freud: “When making a decision of minor importance, I have always found it advantageous to consider all the pros and cons. In vital matters, however, such as the choice of a mate or a profession, the decision should come from the unconscious, from somewhere within ourselves. In the important decisions of our personal life, we should be governed, I think, by the deep inner needs of our nature.”

Steve Jobs: “Have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.”

Jim Carrey: “My father could have been a great comedian, but he didn’t believe that was possible for him, and so he made a conservative choice. Instead, he got a safe job as an accountant, and when I was 12 years old, he was let go from that safe job and our family had to do whatever we could to survive. I learned many great lessons from my father, not the least of which was that you can fail at what you don’t want, so you might as well take a chance on doing what you love.”

Of course, these are people who “made it” doing what they love(d), so they are biased. Not everyone who wants to be a comedian (or entrepreneur, or psychologist) does well, including those who chose their professions by gut feeling. What is true is that in order to succeed (whatever that means), a lot of challenges must be overcome, and we’re more willing to work hard if we like what we do. And “to like” is a feeling.

The Problem With Stories: A Short Story

Person X, after overcoming a big challenge (getting rich, finding love, fighting depression, you name it), feels compelled to tell everyone how the problem was solved.

Person Y, struggling with the same challenge, bumps into Person X’s account of how to go about it, and dives into the story.

Problem: X and Y, though having faced similar challenges, have, in most cases, different backgrounds, live in different contexts, and are subject to different random events.

Discussion: We are hardwired to like stories, and that was arguably a good thing when human tribes had at most a few hundred people. But in the world of big urban centers with millions, personal stories are usually mere single samples in a statistical distribution of very high variance.

The School of Life

I cannot recommend Alain de Botton’s The School of Life enough. I know of no other thinkers who approach and understand the wide range of pains of the modern world as profoundly while keeping a sober optimism about the future.

This is a fine example of their work, a short video essay on the dark side of meritocracy: https://www.youtube.com/watch?v=bTDGdKaMDhQ

YouTube Channel: https://www.youtube.com/channel/UC7IcJI8PUf5Z3zKxnZvTBog

Text essays: http://www.thebookoflife.org

Online Resources

Curation seems to be a thing of the current online zeitgeist. In that spirit, here I list five of my favorite online locations, three of which are relatively unknown.

News: BBC. Comprehensive, general, and free.

Social network: Twitter. It might not ‘get’ business the way Facebook does, but it’s more conducive and nurturing to highbrow content.

How to live: The Book of Life and The School of Life. A great project by philosopher Alain de Botton.

The business of technology: Stratechery. A unique perspective on what makes the technology world move, by Ben Thompson.

History: Hardcore History. An incredibly engaging podcast by self-declared ‘amateur’ historian Dan Carlin.

A.I. Paranoia in the News

What’s the difference between the following sentences? (a) Robot Kills Man; (b) Man Killed by Robot. In this case it’s not simply a question of active versus passive voice. The first also implies intention, which, despite the current hype, robots do not possess. For many media outlets, however, the death of a worker in an incident with a factory robot earlier this month apparently wasn’t newsworthy enough on its own, and they had to insert some element of evil Artificial Intelligence to announce the story. The list below is a sample of how different news websites titled the corresponding piece.

Fortune: “Worker killed by robot at VW plant”

Ars Technica: “Man killed by a factory robot in Germany”

The Independent: “Worker killed by robot at Volkswagen car factory”

BBC: “Man crushed to death by robot at car factory”

Gawker: “Worker Crushed to Death by Robot in Volkswagen Plant”

Geek.com: “Volkswagen robot factory worker takes a human life”

NBC News: “Robot Crushes Contractor to Death at VW Motor Plant in Germany”

NY Post: “Robot kills man at Volkswagen plant”

USA Today: “Robot hits, kills Volkswagen worker”

The Guardian: “Robot kills worker at Volkswagen plant in Germany”

Mashable: “Robot kills man at Volkswagen plant in Germany”

Business Insider: “A robot killed a factory worker in Germany”

CNN: “Car assembly line robot kills worker in Germany”

Time: “Robot Kills Man at Volkswagen Plant”

Telegraph: “Robot kills man at Volkswagen plant in Germany”

NY Times: “Robot Kills Man at Volkswagen Plant in Germany”

The Register: “Rise of the Machines: ROBOT KILLS MAN at Volkswagen plant”

Breitbart: “AS FEARS OVER ARTIFICIAL INTELLIGENCE GROW, A ROBOT KILLS A WORKER IN A GERMAN CAR FACTORY”

On the Reproducibility Crisis in Science

In the last few months, a number of articles discussing the problem of reproducibility in science have appeared. Writing for the website phys.org in September, Fiona Fidler and Ascelin Gordon called it the “reproducibility crisis.” A piece in The Economist last October emphasized how this harms science’s ability to correct itself. In the same month, the Los Angeles Times’ Michael Hiltzik wrote on how “researchers are rewarded for splashy findings, not for double-checking accuracy.”

Reproducibility is one of the main pillars of the scientific method. Its meaning is straightforward: scientists should devise experiments and studies that can be repeated by themselves and by other researchers working independently, with the same results. If a study proposes a drug for a certain disease, for instance, it is clear why the method should be tested many times before the medication appears in drugstores. But lately there have been alarming reports regarding the practice.

Michael Hiltzik cites a project in which the biotech firm Amgen tried to reproduce 53 important studies on cancer and blood biology. Only 6 of them led to similar results. In another example, a company in Germany found that only a quarter of the published research backing its R&D projects could be reproduced.

The consequences are clearly bad for the public in general, but they also harm science itself by impairing its reputation, which is already under severe attack on the topics of global warming and evolution by natural selection.

The blame should be shared between scientists and the general public. Owing to people’s restless desire for novelty, and to journalists eager to meet that demand, science news has its own section in major newspapers, where the curiousness of a discovery matters more than how reproducible the study is. The scientists’ fault is that research quality is measured essentially by number of publications, with little attention paid to reproducibility.

One initiative to address the problem is the creation of a “reproducibility index,” to be attached to journals in a way similar to the “impact factor.” In computer science, researcher Jake Vanderplas proposed that the code used to produce the results should be well-documented, well-tested, and publicly available. Vanderplas argues that this “is essential not only to reproducibility in modern scientific research, but to the very progression of research itself.”

Just as a successful smartphone comes with a so-called ecosystem of apps, developers, and media content, science venues should also provide tools for easier reproducibility. Data, code, and other materials supplementing the main manuscript should be required. Rather than placing an extra burden on researchers, this would facilitate the validation and comparison of results, and speed up the pace at which important discoveries happen.

(A version of this article appeared in Washington Square News.)

Social Bots Threat to Online Interactions

If you use Facebook, you might have noticed in the last few weeks a number of bizarre posts from an app called What Would I Say. Implemented by a team of graduate students at a hackathon at Princeton less than three weeks ago, it is the latest internet trend to spread like wildfire. What Would I Say is yet another addition to the frivolity, the non-seriousness, of online social networks. But worse yet, it highlights a technology that might finally undermine the possibility of any genuine online interaction at all: social bots.

What Would I Say is a bot that scans your Facebook posts, builds a probabilistic model of word sequences, and outputs the most likely sequences. The idea is not new; there is an equivalent for Twitter, called That Can Be My Next Tweet. So far these apps have been mainly a source of entertainment — producing ironic sentences such as “we can’t do it” from Barack Obama’s posts — but there are already attempts to use the technology seriously: according to a BBC article published last week, Google just patented a bot to mimic a person’s behavior in online social networks.
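Neither app publishes its internals, but the general technique is a simple Markov-style model: record which words follow each short context, then walk the table. A minimal sketch (all names and the sample text are illustrative):

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=2, max_words=20):
    """Random walk over the model, starting from a random context."""
    context = random.choice(list(model.keys()))
    out = list(context)
    for _ in range(max_words):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

posts = "we can do it . we can try . we cannot do it alone"
print(generate(build_model(posts)))
```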

That’s disappointing. While the technology behind social bots does have high scientific value — mainly in the context of Natural Language Processing — attempts to seriously turn these ideas into products are repugnant, not because models that generate text are still rudimentary — which they are — but because they make the internet even less humane than it already is.

In fact, the small audience for serious attempts to discuss relevant issues with friends online, and the hostility of conversations with strangers in web forums, already make it very stressful to get anything meaningful out of online interactions. Knowing that what remains of those interactions might be produced by a computer algorithm will only further erode online conversations, to the point that none are left.

Twenty years ago Peter Steiner published a cartoon in The New Yorker in which a dog says to another: “On the Internet, nobody knows you’re a dog.” It’s unbelievable that some people are taking the statement seriously. Aren’t the automatic birthday messages from local businesses annoying enough already? How good would you feel to discover that the nice birthday message sent by your crush on Facebook was actually written by a bot?

The faking of one’s feelings towards others is already practiced online by those who display such behavior in the real world. We don’t need bots to create more false expectations in people. Bots are great at finding things and organizing data. Having them mimic our social behavior online will only cause the social aspect of the internet to vanish.

(A version of this article appeared in