Most of us have used apps like Shazam, which can identify songs when we hold our phone up to a speaker. But what if an app could identify a piece of music based on nothing more than your thought patterns? Impossible? Perhaps not, according to new research carried out by investigators at the University of California, Berkeley.

In 2014, researcher Brian Pasley and colleagues used a deep-learning algorithm and brain activity, measured with electrodes, to turn a person's thoughts into digitally synthesized speech. They achieved this by analyzing a person's brain waves while they were speaking, in order to decode the link between speech and brain activity.
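To make the general idea concrete, the sketch below shows the rough shape of such a decoding pipeline: a regression model trained to map per-frame neural features onto speech spectrogram frames. It uses synthetic data and a small scikit-learn network purely as a stand-in; the dimensions, feature names, and model choice are illustrative assumptions, not the authors' actual recordings or architecture.

```python
# Hypothetical sketch of a brain-to-speech decoder.
# Synthetic data stands in for the real electrode recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_frames, n_electrodes, n_spectral_bins = 2000, 64, 32

# X = neural activity features per time frame (one value per electrode),
# Y = the speech spectrogram frame produced at the same moment.
X = rng.standard_normal((n_frames, n_electrodes))
true_map = rng.standard_normal((n_electrodes, n_spectral_bins))
Y = X @ true_map + 0.1 * rng.standard_normal((n_frames, n_spectral_bins))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# A small multi-output neural network learns the mapping from
# brain activity to spectrogram frames.
decoder = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
decoder.fit(X_train, Y_train)

# Predicted spectrogram frames would then be inverted back into audio
# (e.g. with a vocoder) to yield digitally synthesized speech.
print("held-out R^2:", decoder.score(X_test, Y_test))
```

In practice, the decoding quality hinges on aligning the neural features and audio frames in time and on having enough paired speech-plus-recording data to train on; the snippet only illustrates the supervised mapping at the core of the approach.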