The Mathematical Guitar

According to the New Oxford American Dictionary, the [acoustic] guitar is “a stringed musical instrument with a fretted fingerboard, typically incurved sides, and six or twelve strings, played by plucking or strumming with the fingers or a plectrum”. The same reference describes the electric guitar as being “a guitar with a built-in pickup or pickups that convert sound vibrations into electrical signals for amplification”.

There are many types of acoustic guitars (such as the folk, twelve-string, and jazz guitars), as well as many types of electric guitars (where, besides the shape of the body, the type and location of the pickups are key features). The strings can be made of nylon or of steel, and many distinct tunings exist. The number of parts that make up the instrument is also large, especially in the electric version.

Physically speaking, the acoustic guitar is a system of coupled vibrators: sound is produced by the strings and radiated by the guitar body. In electric guitars, the vibrations of the body are not of primary importance: string vibrations are captured by pickups and “radiated” by external amplifiers. Pickups can be electromagnetic or piezoelectric. Piezoelectric pickups are also common in acoustic guitars, eliminating the need for microphones, although microphones better capture the “acoustic” nature of the sound produced.

In summary, a guitar is a six-string musical instrument, acoustic or electric, with the default tuning 82.41, 110.00, 146.83, 196.00, 246.94 and 329.63 Hz, from the top to the bottom string from the audience’s point of view. These fundamental frequencies correspond to MIDI notes E2, A2, D3, G3, B3 and E4, respectively. It is also assumed that frets are spaced along the fretboard according to the equal-tempered scale.
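As a quick sketch of how these numbers fit together (the function names here are illustrative, not from any particular library), the frequencies above follow from the equal-tempered rule that MIDI note n has frequency 440 · 2^((n − 69)/12), with A4 (MIDI 69) at 440 Hz, and that each fret raises the pitch of a string by a factor of 2^(1/12):

```python
# Equal-tempered frequencies for the standard guitar tuning.
# MIDI note n has frequency 440 * 2**((n - 69) / 12); fret k on a
# string multiplies the open-string frequency by 2**(k / 12).

OPEN_STRINGS = {  # MIDI numbers for E2, A2, D3, G3, B3, E4
    "E2": 40, "A2": 45, "D3": 50, "G3": 55, "B3": 59, "E4": 64,
}

def midi_to_hz(note: int) -> float:
    """Frequency of a MIDI note, with A4 (note 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def fret_hz(open_note: int, fret: int) -> float:
    """Frequency of a given fret on a string (fret 0 = open string)."""
    return midi_to_hz(open_note) * 2 ** (fret / 12)

for name, note in OPEN_STRINGS.items():
    print(f"{name}: {midi_to_hz(note):.2f} Hz")
```

Running it reproduces the tuning listed above (82.41 Hz for E2 through 329.63 Hz for E4), and fret 12 of any string comes out exactly one octave (double the frequency) above the open string.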

Continue reading [link to PDF, 1.3 MB]


from times of deep darkness
fear, hunger, pain
we arise

from ages of plain tyranny
of miserable, selfish gods
we survive

testing mother nature
crafting math, literature
we describe

combining what makes matter
as far as to scratch the heavens
we arrive

but doubt lurks around
and hope, hits the ground
when lies are told aloud

by men who drink from myths
that fit no one and all
when cherry-picking rules

certainty is not truth
comfort is not truth
confidence’s not truth

but as such they are sold
for our souls
which we don’t know
for sure that exist

Impressive, Autopilot, But No Thanks

It wasn’t too long after exercise bicycles were invented that people started noticing the irony of those who drive to the gym to run on a stationary bike. In modern days of automation and mindful meditation, the same folly applies to the office worker who gets stressed developing an algorithm to execute a boring task and save some time, only to later spend time mindfully executing some boring task to relax. Given current expectations about the future of mobility, soon that office worker will automate a boring task in a car that self-drives to the gym, run on a stationary bike, then perform another boring task mindfully to relax, while the car self-drives back home.

Having a car that drives itself seems at first a good idea. Perhaps we could use the time to learn a new language, work on our startup, catch up with friends on social media, or take a nap. But here I want to argue for the default option: that we should be driving our cars, even if not required to.

To be fair, it’s not easy. On the practical side, traffic is chaotic, other drivers are aggressive, roads are badly maintained, and the commute is long. In terms of productivity and time management, given the pressures and anxieties of work, we could indeed often put the minutes we spend driving to use elsewhere. A third major factor impacting the experience on the road is design: most popular cars are simply boring to drive — the few people who appear to be excited about their automobiles are those who can afford sleek, sporty models from a handful of Italian manufacturers.

Now Silicon Valley is determined to solve the problem of getting us some extra time, via automation, and perhaps by following Steve Jobs’ mantra that consumers don’t know what they want until it’s shown to them, it seems we collectively agreed that self-driving is really a goal we should be spending a lot of brain power on right now — as if it would somehow solve one of the most pressing issues of our existence.

It will not.

While it would certainly be better to have the option of not driving, resources are scarce, and we do have to choose priorities. Just to stay within the same industry, how about bringing the development of engines based on renewable energy to the top of the agenda instead? Tesla should be making headlines for its electric engine, not for its autopilot software.

Entrepreneurs know that consumers are much more susceptible to arguments that benefit them personally (the promised safety of an autopilot) than to something vague and intangible (the environment). There are, however, major psychological benefits to be gained by driving, which we can tap into with just a bit of mindfulness and self-control over road rage.

First, recognition and contemplation of our good luck. If we zoom out a little on the line of history, we see the miracle of science and engineering involved in pushing us across the surface of the Earth faster than any human had traveled until a mere century ago. While we still have to keep our hands on a circular interface in order to tell the machine exactly where to go, our ancestors had to pull ropes attached to various domesticated animals — none of which had air conditioning, cup holders, or GPS.

Second, if in the name of productivity we eliminate every possible chunk of idle time, when will we give epiphanies the opportunity to happen? The so-called “shower thoughts” — which occur when we are relaxed, performing some rather boring task almost automatically — are some of the best sources of good ideas, provided we serendipitously pay attention. Obviously, one cannot be too relaxed on the road, but even this is good: the need to keep constant focus on a methodical task is a great way to tame a restless mind — precisely what mindfulness advocates argue for.

Third, in the chaos of modern life, when we are often reminded of how hard it is to have things happen as we’d like them to, the sense of control over a machine much heavier and stronger than us brings a sense of power and confidence that we rarely encounter.

We do need cars that guard us from occasional mistakes, and certain people — for example, the elderly and disabled — would potentially benefit from full automation. But Silicon Valley and the auto industry are trying to tell us, with a certain imperative attitude, that we all need self-driving cars. We really don’t. What we need instead is to rethink driving: to be conscious of, and leverage, the numerous therapeutic opportunities that it provides.

On Knowledge and Confidence

Confucius: “Real knowledge is to know the extent of one’s ignorance.”

Bertrand Russell: “One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision.”

Charles Darwin: “Ignorance more frequently begets confidence than does knowledge.”

The Political-News Glass

Too much is said (or rather, boasted with an air of superiority) about political subjects, and probably the best everyone could do would be to shut up a little. In this spirit, I shall keep it short, while presenting two useful lenses through which to read the political news.

Lens 1: On Bullshit, a 2005 philosophical essay by Harry G. Frankfurt.

From the Wikipedia article: “bullshit can be neither true nor false; hence, the bullshitter is someone whose principal aim — when uttering or publishing bullshit — is to impress the listener and the reader with words that communicate an impression that something is being or has been done, words that are neither true nor false, and so obscure the facts of the matter being discussed.”

Lens 2: Dunning–Kruger effect, a cognitive bias first described in 1999.

From the Wikipedia article: “The Dunning–Kruger effect is a cognitive bias in which relatively unskilled persons suffer illusory superiority, mistakenly assessing their ability to be much higher than it really is. Dunning and Kruger attributed this bias to a metacognitive inability of the unskilled to recognize their own ineptitude and evaluate their own ability accurately.”

On Time Management

Past: drive to the gym to run on stationary bike.

Present: get stressed developing automated system to execute a boring task and save time, then spend time mindfully executing some boring task to relax.

Future: automate boring task in car that self-drives to gym, run on stationary bike, then perform some boring task mindfully to relax while car self-drives back home.

On Singularities

A computer, once again, outperformed a human in a highly specific task, this time around the game of Go, using, in part, a recent (well, not that recent, but also not “traditional”) AI technique known as Deep Learning.

The media, once again, made a splash, and some critics were quick to dismiss the feat by pointing to the limitations of Deep Learning (the winning algorithm also used more traditional AI methods).

Of course, the people on the forefront of Deep Learning know better than anyone about its limitations — they simply have more faith in it than others. Deep down (no pun intended), they probably don’t like such splashy news either, because it raises expectations, but we all understand the importance of advertisement (we live in a social, political world).

If history is any guide, the current hype will pass, as have many other AI hypes. It is not impossible that general AI will happen. Singularities do happen: this universe, self-replication, self-consciousness. But they seem to occur only every billion years or so.

Hence, in the big scheme of things, the last singularity happened just “yesterday,” and we will have to wait a whole lot for the next. Current AI progresses are admirable, and important, but as a society, we have to learn to look at them for what they really are: incremental steps.

Related: On the Higgs Boson Hysteria

TensorFlow 101

There’s a sort of “gold rush” among Machine Learning toolkits to grab the attention of developers. Caffe, Torch, Theano, and now TensorFlow, are only some of the competitors. Which one to choose?

Hard to know for sure. There are the usual technical trade-offs, but for the user, besides technical capabilities, often the choice comes down to which one has the best documentation (i.e., which one is easier to use).

So far, given the power of its sponsor, TensorFlow seems to be the one with the more serious approach to documentation. Still, the MNIST and CNN tutorials could be simpler.

Introducing: TensorFlow 101.

This project has two main files ( and, and two sample datasets (subsets of the MNIST and CIFAR10 databases). The Python routines are modified from the “MNIST For ML Beginners” and “Deep MNIST for Experts” (from

The main difference is that and contain code to read and build train/test sets from regular image files, and therefore can be more easily deployed to other databases (which, ultimately, is the goal of the user). Notice that the folders MNIST and CIFAR10 are organized in subfolders Train and Test, and in these subfolders each class has a separate folder (0, 1, etc.).

Therefore, one simple way to deploy and to your own custom database is to organize your database in the same hierarchy as MNIST and CIFAR10 in this project, and modify the variable “path” in the .py routines to point to your dataset. Notice that your dataset doesn’t have to have 10 classes; however, all images in the provided sample datasets are grayscale and have size 28×28, so non-trivial modifications to the code are needed to deal with other types of images.
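As a minimal sketch of that folder convention (the function name below is illustrative and not part of the project’s code), here is a stdlib-only routine that walks a dataset laid out as path/Train/&lt;class&gt;/… and path/Test/&lt;class&gt;/…, returning (file, label) pairs — the part you would adapt when pointing the scripts at your own database:

```python
# Sketch: collect (filepath, class_label) pairs from a dataset organized
# as path/<split>/<class>/<image files>, where <split> is Train or Test
# and each class has its own subfolder (0, 1, etc.).
import os

def list_labeled_images(path, split="Train"):
    """Return sorted (filepath, class_label) pairs for one split."""
    pairs = []
    split_dir = os.path.join(path, split)
    for label in sorted(os.listdir(split_dir)):
        class_dir = os.path.join(split_dir, label)
        if not os.path.isdir(class_dir):
            continue  # skip stray files at the split level
        for fname in sorted(os.listdir(class_dir)):
            pairs.append((os.path.join(class_dir, fname), label))
    return pairs
```

From these pairs, each image file would then be decoded and flattened (e.g., a 28×28 grayscale image into a 784-dimensional vector) before being fed to the training routine; that decoding step is where the non-trivial modifications for other image types come in.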

See also: Udacity Deep Learning Course.