“I’m Sorry I Haven’t a Clue” is a BBC radio comedy panel game. In one recurring round, the panelists are asked to sing the lyrics of one song to the melody of another. The results are often funny, sometimes weirdly confusing. You would expect a song stitched together from components of two different songs to be a mess, but it is anything but.
The words of one song seem to sit in harmony with the tune of the other, yet the two still portray disjoint sentiments. The game is fun because the audience gets to spot the disconnect between the expressiveness of the music and that of the lyrics.
We all know that for a tune to be good, it should not only flow harmoniously with the words but also depict something attuned to their meaning. It is not difficult to decode the meaning of a song’s lyrics by parsing its words and sentences, but how can you understand the music itself, if at all?
Before we try to understand the meaning of music, we need to answer a prior question: is there any meaning associated with music in the first place? Lyrics are easy to attach meaning to because they can be processed as language, but what about the music? Is there a way to accurately define what a piece of music is trying to convey?
For example, what is the music of the song Lately by Lera Lynn trying to convey? Does the music of The Reason by Hoobastank tell us anything about whether the song is a happy one or a sad one? The researchers believe that once we gather enough data, we can answer such questions with some level of scientific certainty. We might be able to program a machine that can listen to and feel the music of songs the way our ears do; perhaps even better.
A new research paper published in the journal Royal Society Open Science tackles this problem by studying the connection between the sentiments of lyrics and the musical components they are sung over.
The researchers have employed good old math to attack the problem, with results that are still far from conclusive. They identified a “secret chord” type that has historically been associated with happy songs. Is this really something we can run with, or is it a gross oversimplification of how music works? Either way, it would be tough to build a machine that consistently gives concrete results in this regard.
To perform research of this nature, scientists need loads of data, and this group of scholars knew that. They fetched data from three large public sources, two of which weren’t originally created for this purpose. Around 90,000 songs were downloaded from Ultimate Guitar, a website where users upload their own musical transcriptions.
To associate emotions with the lyrics of a song, the scientists turned to labMT, a crowdsourced dataset that assigns “emotional valence” ratings to words; roughly, the extent to which a word is positive or negative. Lastly, data on the origin of the songs was taken from Gracenote.
The researchers then correlated the valence of the words with the chord they were sung over. They repeated the analysis several times, progressively refining the data to remove noise and outliers.
By running the tests on many different subsets of the data, the authors were able to confirm that major chords are associated with more positive words than minor chords, a result that held up under careful statistical analysis.
The results weren’t entirely what they were anticipating, though. They found that 7th chords, the ones built from four distinct notes instead of the usual three, had the strongest association with positive words.
Moreover, even the minor 7th chords showed this strong association. Could this mean that the 7th chord is, in fact, the happy chord? Could we tell a happy song from a sad one simply by looking for 7th chords?
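The pipeline described above, looking up each lyric word’s valence and averaging those valences per chord class, can be sketched in a few lines. This is a minimal illustration, not the paper’s actual code: the tiny valence table and song snippet below are made-up stand-ins for labMT and the Ultimate Guitar transcriptions.

```python
from collections import defaultdict
from statistics import mean

# Toy stand-in for labMT: word -> valence on a 1-9 scale (made-up values).
VALENCE = {
    "love": 8.0, "sun": 7.4, "alone": 3.0,
    "cry": 2.5, "dance": 7.6, "rain": 4.2,
}

# Toy stand-in for one transcription: (chord class, lyric fragment) pairs.
SONG = [
    ("major",  "love the sun"),
    ("minor",  "alone cry"),
    ("7th",    "dance love"),
    ("minor7", "rain love"),
]

def mean_valence_by_chord(song, valence):
    """Average the valence of rated words sung over each chord class."""
    scores = defaultdict(list)
    for chord, lyric in song:
        for word in lyric.lower().split():
            if word in valence:  # skip words the valence table doesn't rate
                scores[chord].append(valence[word])
    return {chord: round(mean(vals), 2) for chord, vals in scores.items()}

print(mean_valence_by_chord(SONG, VALENCE))
# -> {'major': 7.7, 'minor': 2.75, '7th': 7.8, 'minor7': 6.1}
```

With real data, the interesting question is exactly the one the paper asks: whether the averages for 7th and minor-7th chords land above or below those for plain major and minor chords.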
Earlier studies had placed the valence of 7th chords somewhere between minor and major. Because their findings conflicted with this body of credible research, the authors verified their results multiple times.
Every time they ran the tests, however, the results were the same. In any case, quantitative research like this is becoming more prevalent in the field of music and emotion. As the amount of available data increases, so do the chances of soulless machines entering the uncharted territories of human creativity.
The question we need to ask is whether these advances are incredible or frightening. Should we be amazed that we are trying to explain art using numbers, something that used to be a defining difference between art and science?
Or should we be scared that machines are entering realms they shouldn’t? What’s next? Machines composing our music for us? Mechanical judges rating songs and operas?
We don’t know whether to be amazed or afraid, but we do know there is no point in fearing the machines themselves. Machines like these are already among us. Consider Songsmith by Microsoft, which generates a musical accompaniment for a melody you sing into it.
The real possibility we should fear is the human misuse of such machines. The art of music has evolved over centuries, and if the last few decades of technological change are anything to go by, we might trade everything we have learned about music theory for sleek data science applications.
Although all of the authors of this paper hail from a university that houses one of the largest music schools in the US, they weren’t music majors; they came from the Department of Informatics.
They did draw on insights from many music school teachers and students, but the crux of their work rested on advanced statistical analysis, which in no way equates to the depth of traditional music theory.
Seventh chords can’t be used interchangeably with major and minor chords: their function is specific, and they occur at particular places within a phrase. The authors present their technique for connecting words with their emotional content as novel, but much the same ground was covered by Deryck Cooke in his 1959 book The Language of Music.
While the words sometimes carry the real, underlying meaning of a song, that’s not always the case. Sometimes the music tells a different story if we listen carefully, and sometimes words and music combine to tell a story that nobody but the author is privy to.