One of the most popular accounts of the “spooky” behavior of particles in quantum physics is the so-called double-slit experiment. Originally designed by Thomas Young in the early 1800s to demonstrate the wave nature of light, it was adopted and extended by Richard Feynman in the 1960s as a thought experiment to highlight the bizarre behavior of subatomic particles.

The setting is simple: an electron gun shooting electrons at a wall with two slits, and an observing screen behind the wall to capture the statistical behavior of the electrons that pass through the slits.

According to our intuition, the distribution of electrons on the back screen should be a bell-shaped curve: the sum of two bell-shaped curves, one for the electrons passing through each slit individually.

But, as it turns out, what is observed is a wave-interference pattern, like the one we would expect if the electron gun were replaced by a wave source and the observing screen measured the maximum height of the arriving wave.

Mathematically, this means that the probability that an electron will arrive at various points on the observing screen should be modeled in a way that allows interference to happen. This is done by using probability amplitudes that are complex waves (waves of complex numbers), and computing the final probability as the squared magnitude of the sum of the waves.
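The difference between the two rules can be seen in a few lines of code. This is a toy sketch, not a physical simulation: the wave number and slit geometry below are arbitrary choices made for illustration. Each slit contributes a complex amplitude at a screen position, and the quantum rule squares the magnitude of the *sum*, while the classical rule sums the individual probabilities.

```python
import numpy as np

# Toy illustration: two paths (slits) contribute complex amplitudes at each
# point x on the screen. Geometry and wave number are arbitrary assumptions.
x = np.linspace(-5, 5, 11)            # positions on the observing screen
k = 2.0                               # assumed wave number (arbitrary units)
d1 = np.sqrt(1.0 + (x - 1.0) ** 2)    # path length from slit 1
d2 = np.sqrt(1.0 + (x + 1.0) ** 2)    # path length from slit 2

amp1 = np.exp(1j * k * d1)            # complex amplitude via slit 1
amp2 = np.exp(1j * k * d2)            # complex amplitude via slit 2

# Quantum rule: squared magnitude of the SUM -> interference fringes.
p_quantum = np.abs(amp1 + amp2) ** 2
# Classical rule: probabilities just add -> a flat value of 2 here.
p_classical = np.abs(amp1) ** 2 + np.abs(amp2) ** 2
```

Where the two path lengths agree (the midpoint of the screen) the amplitudes add in phase and `p_quantum` reaches 4, twice the classical value; where they differ by half a wavelength the amplitudes cancel and `p_quantum` dips below the contribution of either slit alone.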

Thus, in order to explain quantum phenomena, physicists had to introduce a new type of probability function, one whose combinations allow cancellation effects to occur. In “classical” probability calculations, distributions can only add up.

Of course, probabilities are in the toolboxes of many branches of science, not only quantum physics. But outside physics there has been no need for this stranger kind so far. Recently, however, we decided to adopt wave-like probability functions for problems in computer science. In a paper recently published at the International Conference on Image Processing, we show how the idea works for a particular problem in computer vision.

We adapted a classical algorithm for detecting circles in images, known as the Hough Transform. In this method, each edge point contributes a probability distribution (a set of guesses) for the location of the center of the circle it belongs to. This distribution is constant along the line through the point perpendicular to the edge.

When the probabilities contributed by all edge points are added, the centers of the circles stand out.
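The classical voting step can be sketched as follows. This is a simplified, hypothetical implementation for a single known radius (the full Hough Transform also searches over radii): each edge point casts constant-weight votes at the two candidate centers lying one radius away along its edge normal, and the accumulator peaks where many votes coincide.

```python
import numpy as np

def hough_center_votes(edge_points, gradients, radius, shape):
    """Simplified sketch of classical circle-center voting with a known
    radius: each edge point casts constant-weight votes along its edge
    normal (the gradient direction). Helper name and signature are
    illustrative, not from the paper."""
    acc = np.zeros(shape)
    for (y, x), (gy, gx) in zip(edge_points, gradients):
        norm = np.hypot(gy, gx)
        if norm == 0:
            continue  # no reliable edge direction at this point
        # Candidate centers sit at +/- radius along the edge normal.
        for sign in (+1, -1):
            cy = int(round(y + sign * radius * gy / norm))
            cx = int(round(x + sign * radius * gx / norm))
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1.0  # constant vote
    return acc
```

Running this on points sampled from a clean circle, with gradients pointing radially, piles all the inward votes onto a single accumulator cell: the circle's center.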

This works well for simple, clean cases. But problems arise in more complex, real world images, such as this picture of a mouse embryo.

The main issue is that many edges that do not belong to cell boundaries also contribute votes. Replacing the default constant votes with wave probabilities, the final sum has much less clutter, thanks to the cancellations produced by wave interference. The centers still stand out because the waves reaching them are “in phase.”
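One way to realize this idea in code, under an assumption about how the phase is assigned (the paper's exact scheme may differ), is to tag each vote with a complex weight whose phase grows with the distance traveled along the voting line. At a true center, every incoming vote has traveled the same distance, the circle's radius, so the votes arrive in phase and reinforce; votes from unrelated clutter edges arrive with scattered phases and tend to cancel.

```python
import numpy as np

def complex_hough_votes(edge_points, gradients, radii, shape, k=1.0):
    """Sketch of phase-carrying circle-center votes (an illustrative
    scheme, not necessarily the paper's): each edge point votes along
    its edge normal at several radii, with complex weight exp(i*k*r).
    In-phase votes reinforce at true centers; clutter votes cancel."""
    acc = np.zeros(shape, dtype=complex)
    for (y, x), (gy, gx) in zip(edge_points, gradients):
        norm = np.hypot(gy, gx)
        if norm == 0:
            continue
        for r in radii:
            for sign in (+1, -1):
                cy = int(round(y + sign * r * gy / norm))
                cx = int(round(x + sign * r * gx / norm))
                if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                    acc[cy, cx] += np.exp(1j * k * r)  # phase-tagged vote
    # Magnitude of the complex sum plays the role of the vote count.
    return np.abs(acc)
```

On the same synthetic circle as before, the votes that land on the center all carry the phase of a single radius, so the magnitude there equals the full vote count, while cells hit by votes of mixed radii accumulate partially cancelling contributions.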

In the article, titled Complex-Valued Hough Transforms for Circles, we show that the modified algorithm is not only visually but also statistically better than the classic one, as seen in experiments with hundreds of natural and synthetic images.

We believe quantum physics can contribute many ideas for better algorithms in computer science, rather than being regarded only as an eventual source of raw computing power in the form of quantum computing. Our current research, for instance, focuses on the mathematical formalism of spin and its use to detect symmetric shapes in images.