In 2012, Joan Serrà and a team of scientists at the Artificial Intelligence Research Institute of the Spanish National Research Council confirmed something that many had come to suspect: music was becoming increasingly the same. Using computer analytics to break down nearly half a million recorded songs by loudness, pitch, and timbre, among other variables, the team found that timbral variety in pop music had been decreasing since the 1960s. This convergence suggested that there was an underlying quality of consumability that pop music was gravitating toward: a formula for musical virality.

These findings marked a watershed moment for the music discovery industry, a billion-dollar endeavor to generate descriptive metadata for songs using artificial intelligence so that algorithms can recommend them to listeners. In the early 2010s, the leading music-intelligence company was the Echo Nest, which Spotify acquired in 2014. Founded in the MIT Media Lab in 2005, the Echo Nest developed algorithms that could measure recorded music using a set of parameters similar to Serrà's, including ones with clunky names like acousticness, danceability, instrumentalness, and speechiness.

The goal was to design a complete fingerprint of a song: to reduce music to data in order to better guide consumers to songs they would enjoy. To round out their models, the algorithms could also scour the internet for, and semantically analyze, anything written about a given piece of music. By the time Spotify bought the Echo Nest, the company claimed to have analyzed more than 35 million songs, using a trillion data points.

*Eventually, listeners may start to resemble the models streaming platforms have created.*
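The kind of per-song fingerprint described above can be illustrated with a minimal sketch: each track reduced to a vector of scored attributes, with recommendation as nearest-neighbor search over those vectors. The attribute names follow the ones mentioned in the text; the example values, the cosine-similarity measure, and the tiny catalog are illustrative assumptions, not the Echo Nest's actual model.

```python
import math

# Echo Nest-style attributes from the text; each scored in [0, 1].
# All numeric values below are made up for demonstration.
FEATURES = ["acousticness", "danceability", "instrumentalness", "speechiness"]

songs = {
    "song_a": {"acousticness": 0.82, "danceability": 0.35,
               "instrumentalness": 0.90, "speechiness": 0.04},
    "song_b": {"acousticness": 0.75, "danceability": 0.40,
               "instrumentalness": 0.85, "speechiness": 0.05},
    "song_c": {"acousticness": 0.10, "danceability": 0.95,
               "instrumentalness": 0.02, "speechiness": 0.30},
}

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two feature vectors (1.0 = same direction)."""
    va = [a[f] for f in FEATURES]
    vb = [b[f] for f in FEATURES]
    dot = sum(x * y for x, y in zip(va, vb))
    return dot / (math.sqrt(sum(x * x for x in va)) *
                  math.sqrt(sum(x * x for x in vb)))

def most_similar(target: str) -> str:
    """Recommend the catalog song whose fingerprint is closest to the target's."""
    return max((s for s in songs if s != target),
               key=lambda s: cosine_similarity(songs[target], songs[s]))

print(most_similar("song_a"))  # song_b: its fingerprint nearly matches song_a's
```

A real system would of course operate on millions of such vectors (and richer ones), but the principle is the same: once a song is reduced to numbers, "sounds like" becomes a distance calculation.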