By: Nghi Le
Hi everyone, my name is Nghi Le. I’m a Data Science major with a minor in Music. I have a deep passion for both Math and Music, and my capstone project is a unique blend of these two fields.
Abstract
Music is widely recognized as highly beneficial to human beings, particularly in relation to human emotions. On one hand, music serves as a medium for people to express their feelings through composing or performing. On the other hand, listening to music can evoke emotions, either by modulating or enhancing a person’s emotional state. Either way, listening to the right music can contribute significantly to a person’s emotional well-being. Despite this, the current state of the art in Music Recommendation Systems (MRSs) remains under-researched when it comes to emotion-based recommendation models. This research therefore develops an emotion-aware song recommender: a webpage asks the user to input a song they want to listen to (or one currently in their listening rotation), recognizes the user’s real-time emotion with a convolutional neural network (CNN) for facial emotion recognition, and finally recommends songs by matching the detected emotion to Music Emotion Recognition (MER) tags.
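To make the pipeline above concrete, here is a minimal sketch of the core loop. It is illustrative only: the open-source `fer` package stands in for the facial-emotion-recognition CNN (the actual project may use a custom model), and the emotion-to-tag mapping, song catalog, and image path are hypothetical placeholders rather than the real system's data.

```python
# Minimal sketch of the emotion-aware recommendation pipeline.
# Assumes the open-source `fer` package (pip install fer) as a stand-in
# for the facial-emotion-recognition CNN; the tag mapping, catalog, and
# image path below are hypothetical placeholders.
import cv2
from fer import FER

# Step 1: detect the user's current emotion from a captured frame.
def detect_emotion(image_path: str) -> str | None:
    frame = cv2.imread(image_path)
    detector = FER(mtcnn=True)              # MTCNN face detector + emotion CNN
    emotion, _score = detector.top_emotion(frame)
    return emotion                          # e.g. "happy", "sad", "angry"

# Step 2: map detected emotions to MER mood tags (illustrative mapping).
EMOTION_TO_MER_TAGS = {
    "happy": {"happy", "upbeat", "energetic"},
    "sad": {"sad", "melancholic", "calm"},
    "angry": {"aggressive", "intense"},
    "neutral": {"calm", "relaxing"},
}

# Step 3: rank catalog songs by how many of their MER tags match the emotion.
def recommend(emotion: str, catalog: list[dict], k: int = 5) -> list[str]:
    wanted = EMOTION_TO_MER_TAGS.get(emotion, set())
    scored = [(len(song["tags"] & wanted), song["title"]) for song in catalog]
    scored.sort(reverse=True)
    return [title for score, title in scored[:k] if score > 0]

catalog = [
    {"title": "Song A", "tags": {"happy", "upbeat"}},
    {"title": "Song B", "tags": {"sad", "calm"}},
    {"title": "Song C", "tags": {"aggressive", "energetic"}},
]
# Fall back to "neutral" if no face is detected in the frame.
print(recommend(detect_emotion("webcam_frame.jpg") or "neutral", catalog))
```

In a full system, the MER tags would come from an annotated music dataset and the matching could be weighted by tag confidence rather than simple set overlap.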
Introduction
Although research on Music Recommendation Systems has been gaining substantial interest in both academia and industry, emotion-based MRSs remain underdeveloped. This is because current MRSs typically focus on user-item interactions and, at times, content-based descriptors, neglecting factors that significantly affect listeners' musical tastes and needs, such as personality and emotional state, owing to the psychological complexity of human emotion \cite{schedl2018current}. As a result, current MRSs often yield unsatisfactory recommendations. To build a stronger and more personalized music recommendation system, I propose an approach that takes the listener's emotional state into account.