
Published on July 1st, 2015 | by Alan Cross


Having Trouble Classifying the Songs in Your Music Library? Then Read This

In order to make sense of the world, humans like to put things in neat little piles. That includes music. But if you’ve ever ripped music to iTunes, you might (as I have) run into all kinds of quandaries when it comes to filling in the metadata field for “genre.” Is this rock or alternative? If it’s alternative, is “indie” a better descriptor?

And the more you try to parse things, the more difficult making these little piles becomes. Late last year, I posted a link to a guy who identified 1,264 micro-genres of music. That didn’t help.

The good news is that help seems to be on the way. A new computer system created by a group at the Neotia Institute of Technology Management and Science (aka NITMAS) in West Bengal, India, looks at each song in terms of tempo, pitch, volume dynamics and phrase repetition to determine the genre of any given track.

Pitch analysis (that is, taking a hard look at the melody) is the most important part of the program. The system looks at 88 different frequency bands in order to “calculate the short-time mean-square power (a kind of measure related to the sound wave’s voltage and current), both individually and as an average.” I have no idea what that means. I just quoted from the Gizmag article. Here’s more:
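If it helps, here’s a rough sketch of what “short-time mean-square power” looks like in code. This is my own illustration, not the researchers’ system: chop the signal into frames and average the squared samples in each one.

```python
import numpy as np

def short_time_ms_power(signal, frame_len=1024, hop=512):
    """Mean-square power of each frame of a 1-D audio signal."""
    powers = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        powers.append(np.mean(frame ** 2))
    return np.array(powers)

# A full-scale 440 Hz sine wave has a mean-square power close to 0.5.
t = np.linspace(0, 1, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
print(short_time_ms_power(tone).mean())  # close to 0.5
```

The paper presumably does this per frequency band (88 of them) rather than on the raw waveform, but the core calculation is the same.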

For tempo, it starts with a novelty curve, which follows changes in the song’s timbre, or tone color – so, basically when the instrumentation changes. It then performs a Fourier transform, which deconstructs the song’s sound wave into many sine curves, each corresponding to a different frequency, which can be further analyzed to get the beats per minute.
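As a sketch of that last step (assuming a novelty curve is already in hand; again, my illustration rather than the paper’s code), you can Fourier-transform the novelty curve and read the tempo off the dominant frequency:

```python
import numpy as np

fps = 100                      # novelty curve sampled 100 times per second
t = np.arange(0, 30, 1 / fps)  # 30 seconds of "song"
# Synthetic novelty curve with onsets at 2 Hz, i.e. 120 BPM.
novelty = np.maximum(0, np.sin(2 * np.pi * 2.0 * t))

# Fourier transform: decompose into sine components, one per frequency.
spectrum = np.abs(np.fft.rfft(novelty - novelty.mean()))
freqs = np.fft.rfftfreq(len(novelty), d=1 / fps)

# Restrict to a plausible tempo range (40-200 BPM) and pick the peak.
mask = (freqs >= 40 / 60) & (freqs <= 200 / 60)
bpm = 60 * freqs[mask][np.argmax(spectrum[mask])]
print(round(bpm))  # 120
```

A real novelty curve would come from tracking timbre changes in the audio, as the quote describes; the Fourier-to-BPM step is the part sketched here.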

For amplitude variation patterns, the signal is smoothed and then mathematical matrix operations are performed on it to get the equivalent of the signal’s texture. While for periodicity, it divides the signal into frames of 100 samples each and calculates cross correlations between them. The system then takes the maximum cross correlation of each frame and uses it to calculate mean and standard deviations.
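Here’s roughly what that frame-wise periodicity procedure could look like (a guess at the details, since the quote doesn’t spell out how frames are paired or normalized):

```python
import numpy as np

def periodicity_features(signal, frame_len=100):
    """Max cross-correlation between consecutive 100-sample frames,
    summarized as mean and standard deviation."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    maxima = np.array([np.correlate(a, b, mode="full").max()
                       for a, b in zip(frames[:-1], frames[1:])])
    return maxima.mean(), maxima.std()

# An 80 Hz sine sampled at 8 kHz repeats exactly every 100 samples,
# so every frame matches the next one perfectly.
t = np.linspace(0, 1, 8000, endpoint=False)
mean_xc, std_xc = periodicity_features(np.sin(2 * np.pi * 80 * t))
```

For a perfectly periodic signal like this, the maxima are all identical (high mean, near-zero standard deviation); real music would show more spread.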

All of this information gets fed into a classification scheme. The researchers tested their method with three classifiers – multilayer perceptron (MLP), which is an artificial neural network that consists of multiple layers of neuron-like things called perceptrons; support vector machines (SVMs), which use machine learning and a set of training data; and random sample consensus (RANSAC), which makes a hypothesis based on a randomly selected sample set, verifies it against the model, and iterates through the data, taking the best-fit estimate as the final one.
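For a feel of how such a comparison might be set up, here’s a minimal sketch using scikit-learn. The study’s 490-song dataset isn’t public and scikit-learn has no RANSAC classifier, so random stand-in features are used and only the MLP and SVM are shown:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(490, 10))    # 490 "songs", 10 features each
y = rng.integers(0, 7, size=490)  # 7 genre labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = {}
for clf in (MLPClassifier(max_iter=500, random_state=0), SVC()):
    clf.fit(X_tr, y_tr)
    scores[type(clf).__name__] = clf.score(X_te, y_te)
print(scores)  # random labels, so both hover around chance (1 in 7)
```

With real tempo, pitch, amplitude, and periodicity features in place of the random numbers, the same loop would reproduce the kind of head-to-head accuracy comparison the researchers ran.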

RANSAC outperformed the other two classifiers on both feature sets developed from a database of 490 songs in seven different genres. The methodology also proved more accurate (“substantially better,” in their words) than the approaches used in previous studies when tested on the same data.

Um, okay.

Bottom line, though: these researchers think they can classify songs better than anyone else, and thus give more power to music recommendation systems. Read the entire article here.

 






About the Author

Alan Cross is an internationally known broadcaster, interviewer, writer, consultant, blogger and speaker. In his 30+ years in the music business, Alan has interviewed the biggest names in rock, from David Bowie and U2 to Pearl Jam and the Foo Fighters. He’s also known as a musicologist and documentarian through programs like The Ongoing History of New Music.

