The things you learn from students. One of mine was saying how difficult it is to use iTunes to find music choices with similar characteristics. Before you say ‘whaddya mean, difficult?!’, remember that ‘genre’ and ‘style’ and ‘mood’ are very broad categories. What she meant was this: say that for an exercise you wanted to try out different kinds of things in three. The best iTunes will manage is to trot out more waltzes and, if your musical knowledge is up to it, other things exhibiting three-ness that you type in – as long as you already know what they might be.
What it won’t do is pull out things at a similar tempo, style, metre or key. The waltz is a good example: after many years, I discovered that nine times out of ten, when dance teachers say ‘waltz’ they would actually prefer a ‘mazurka’ of the ballroom type, but the ballroom mazurka faded from anyone’s tagging mechanisms long, long ago. There are waltz-like things that aren’t waltzes, and waltzes that aren’t waltz-like. On the basis of metre, tempo and syncopation, Baroque music has much in common with jazz and other non-classical forms, but it’s not until you forcibly remove the social, cultural and personal connotations that you begin to see the resonance. And music in dance teaching is a funny mixture of ‘I’ll have that because I love it, it makes me want to dance and it makes me feel, ooooh so gorgeous, pink and sparkly’ and ‘I’ll have that because it’s in 9/8. It’s ghastly, but it’s suitable.’
Anyway, back to the iTunes issue. I muttered something to the student about future developments in music datamining, and determined to look out for such a tool. It wasn’t long before I found the ‘musical brain’ software from The Echo Nest, developed by PhD students in music synthesis and understanding at MIT. This does exactly what iTunes and a human brain can’t: it quickly analyses music files for the kind of information that links pieces by their auditory characteristics, rather than just by style, genre and other tag-based matching. The Analyze API examines things like metre, key and harmony, which enables more useful comparisons and matching with other music files.
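If you're curious what that sort of analysis looks like in practice, here's a minimal sketch in Python. It doesn't use The Echo Nest's Analyze API itself (I won't guess at its exact calls here); instead it leans on the open-source librosa library to pull a tempo estimate and a very crude key-centre guess out of an audio file, the filename being purely illustrative.

```python
import numpy as np
import librosa

# Load the audio file (librosa resamples to 22050 Hz by default)
y, sr = librosa.load("song.mp3")

# Estimate the tempo and the beat positions
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

# Chroma features give a rough picture of the harmonic content;
# the strongest average chroma bin is a crude proxy for the key centre
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
key_index = int(np.argmax(chroma.mean(axis=1)))
pitch_classes = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

print(f"Estimated tempo: {float(tempo):.1f} BPM")
print(f"Rough key centre: {pitch_classes[key_index]}")
```

By all accounts the real Analyze output is much richer than this – beats, bars, sections, timbre and so on – but even these two numbers are exactly the sort of thing that tag-based searching never gives you.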
So what? Well, go to thisismyjam.com, which uses this API, and try it out to see the result. Pick a few songs that you know are roughly similar, drag them into a playlist, and listen to the way the API makes a ‘best fit’ of your selections, matching key, harmony, tempo, beat and so on, so that the tracks melt into each other on the basis of shared characteristics – a kind of enharmonicism by multiple features.
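And just to demystify the ‘best fit’ a little: underneath, the idea is to represent each track as a handful of numbers and then sequence the playlist so that neighbouring tracks are close together. Here's a toy sketch of that – not how thisismyjam.com or The Echo Nest actually do it, and the feature values are invented – using a greedy nearest-neighbour ordering over made-up (tempo, key, loudness) vectors.

```python
import numpy as np

# Invented feature vectors, purely for illustration:
# (tempo in BPM, key as pitch class 0-11, loudness in dB).
# A real system would use many more features and scale them
# so that tempo doesn't swamp everything else.
tracks = {
    "Waltz in A flat":  np.array([84.0, 8.0, -14.0]),
    "Ballroom mazurka": np.array([88.0, 7.0, -13.0]),
    "Baroque gigue":    np.array([132.0, 2.0, -11.0]),
    "Jazz waltz":       np.array([140.0, 2.0, -9.0]),
}

def order_playlist(features, start):
    """Greedy 'best fit': after each track, play the unplayed track
    whose feature vector is nearest (Euclidean distance) to the current one."""
    remaining = dict(features)
    order = [start]
    current = remaining.pop(start)
    while remaining:
        nxt = min(remaining, key=lambda name: np.linalg.norm(remaining[name] - current))
        order.append(nxt)
        current = remaining.pop(nxt)
    return order

print(order_playlist(tracks, "Waltz in A flat"))
# -> ['Waltz in A flat', 'Ballroom mazurka', 'Baroque gigue', 'Jazz waltz']
```

Crude as it is, even this toy version keeps the two three-time pieces together and eases you from the quiet, slow end of the list to the loud, fast end – which is the ‘melting into each other’ effect, in miniature.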