Music has always played a huge part in our lives: from the lullabies we hear as children to the music we listen to as teenagers and the songs we choose for special occasions. It’s impossible to hear music and not feel something at the same time. So, the question is, why does some music make us happy? Why does some make us sad? Why do we even listen to sad music if it makes us sad? And why do we need to answer these questions anyway?
But before anything else, researchers have long debated the definition of emotion (Hunter and Schellenberg, 2010). What exactly counts as an emotion? For the layperson, emotions are the usual suspects: happiness, sadness, anger, disgust, excitement, fear, and so on. In research terms, however, it gets much more complicated than that. For example, when we listen to music, are we actually feeling the same emotions we feel in everyday life, or is the music just inducing temporary feelings that are different from our everyday emotions?
Another question is how exactly music affects emotions. Psychologists have to separate emotions, feelings, and moods. While various theories have been proposed to explain how these arise, it is mostly agreed that music expresses emotion and that this affects us. Researchers argue over whether the emotions we feel are genuine everyday emotions, aesthetic emotions, or simply moods induced by the music. It has also been debated whether we merely perceive the emotion the music expresses or actually feel that emotion ourselves (Hunter and Schellenberg, 2010).
There have been various studies on the many properties of music and emotion (citation), but if there’s one thing researchers agree on, it’s that certain chords are associated with certain emotions. Major chords are associated with happiness, minor chords with sadness, and dissonant chords with unpleasantness (Scherer, 2004). A simple example is playing the same song in either a minor or a major key and noticing the difference in the emotions it evokes.
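Just to make that example concrete, here is a rough sketch in Python (entirely my own illustration, using standard equal-temperament frequencies rather than anything from the studies cited here) of how small the difference between a C major and a C minor triad really is: only the middle note changes.

# Rough sketch: frequencies of a C major vs. C minor triad in 12-tone
# equal temperament. Only the third changes (E vs. E-flat), yet the two
# chords tend to be heard as "happy" vs. "sad".

C4 = 261.63  # middle C, in Hz

def note(semitones_above_c4):
    """Frequency of the note a given number of semitones above middle C."""
    return C4 * 2 ** (semitones_above_c4 / 12)

c_major = [note(0), note(4), note(7)]  # C, E, G
c_minor = [note(0), note(3), note(7)]  # C, E-flat, G

print("C major:", [round(f, 2) for f in c_major])  # [261.63, 329.63, 392.0]
print("C minor:", [round(f, 2) for f in c_minor])  # [261.63, 311.13, 392.0]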
It’s incredible how just changing the set of chords can make us feel very different things. But chords aren’t the only thing that changes how music portrays emotion; other musical characteristics matter too. In a series of earlier studies (citation), tempo and mode came out as the strongest determinants of emotion in music: fast tempos and major modes are associated with happiness, while slow tempos and minor modes are associated with sadness. Even so, the most consistently judged emotions in music are happiness and sadness, compared to other emotions like anger and fear.
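To put that rule of thumb in plainer terms, here is a toy sketch (my own illustration; the 120 BPM cutoff and the labels are assumptions, not values from the studies mentioned above) of how the tempo-and-mode finding could be written down:

# Toy rule-of-thumb classifier based on the tempo/mode findings above:
# fast + major tends to be judged happy, slow + minor tends to be judged sad.
# The 120 BPM threshold and the labels are illustrative assumptions only.

def rough_emotion(tempo_bpm, mode):
    fast = tempo_bpm >= 120  # arbitrary cutoff for "fast"
    if fast and mode == "major":
        return "happy"
    if not fast and mode == "minor":
        return "sad"
    return "ambiguous"  # mixed cues are harder to judge consistently

print(rough_emotion(140, "major"))  # happy
print(rough_emotion(70, "minor"))   # sad
print(rough_emotion(140, "minor"))  # ambiguous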
Some evidence has also been found that timbre affects the emotion we hear in music. Soft timbres with attenuated high frequencies are associated with sadness, whereas sharp timbres are associated with anger. Loudness has also been found to be a universal cue for anger, according to a cross-cultural study by Balkwill and coworkers (2004).
And although it has been found that music portrays emotion, interestingly enough, children have difficulty identifying the emotions in music until around the age of 11, when they reach adult-like accuracy (Hunter et al., 2008).
So why should we learn about music and emotion and how they’re related? Music is a huge part of our lives, and the music industry wants to sell more, wants people to buy its products. Obviously, if you’re a customer and you realize you like a product, the next question you usually ask is:

Is there anything else like this?
And so, these connections between music and emotion have been put to use in the music industry to recommend songs or group playlists according to the emotion each one portrays. These systems use formulas, theories, and models to predict the type of emotion a piece of music expresses and then recommend other music based on how similar the songs are. Examples include Spotify, MusicCat, and other music-sharing services (Han, Rho, Jun, & Hwang, 2010; Shan, Kuo, Chiang, & Lee, 2009).
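I’m simplifying, of course, but the core idea behind these systems can be sketched in a few lines of Python: give each song a vector of emotion scores (the songs and numbers below are made up) and recommend the songs whose vectors are most similar to the one the listener liked. This is only my own illustration of the general approach, not how Spotify, MusicCat, or the cited systems actually implement it.

import math

# Toy emotion-based recommender: each song gets a made-up vector of
# [happiness, sadness, anger] scores, and we recommend the songs whose
# vectors are closest (by cosine similarity) to a song the listener liked.
songs = {
    "Song A": [0.9, 0.1, 0.0],
    "Song B": [0.8, 0.2, 0.1],
    "Song C": [0.1, 0.9, 0.0],
    "Song D": [0.2, 0.1, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def recommend(liked, k=2):
    """Return the k songs whose emotion profile is most similar to the liked song."""
    others = [title for title in songs if title != liked]
    others.sort(key=lambda title: cosine(songs[liked], songs[title]), reverse=True)
    return others[:k]

print(recommend("Song A"))  # ['Song B', 'Song D'] -- the other upbeat song comes first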
So there you have it: the connection between music and emotion, and how it gets put to work in real life.
REFERENCES:
Balkwill, L.-L., Thompson, W. F., & Matsunaga, R. (2004). Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners. Japanese Psychological Research, 46, 337–349.

Han, B. J., Rho, S., Jun, S., & Hwang, E. (2010). Music emotion classification and context-based music recommendation. Multimedia Tools and Applications, 47(3), 433–460.

Hunter, P. G., & Schellenberg, E. G. (2010). Music and emotion. In Music perception (pp. 129–164). Springer New York.

Pallesen, K. J., Brattico, E., Bailey, C., Korvenoja, A., Koivisto, J., Gjedde, A., & Carlson, S. (2005). Emotion processing of major, minor, and dissonant chords. Annals of the New York Academy of Sciences, 1060(1), 450–453.

Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms? And how can we measure them? Journal of New Music Research, 33(3), 239–251.

Shan, M. K., Kuo, F. F., Chiang, M. F., & Lee, S. Y. (2009). Emotion-based music recommendation by affinity discovery from film music. Expert Systems with Applications, 36(4), 7666–7674.