Facial Emotion Based Music Recommendation System Using Computer Vision and Machine Learning Techniques

Abstract: The face is an important cue for predicting human emotions and mood. Human emotions are usually extracted with a camera, and many applications are being developed around emotion detection, such as business notification recommendation, e-learning, mental disorder and depression detection, and criminal behaviour detection. In this work we develop a prototype of a dynamic music recommendation system driven by human emotions. Based on each user's listening pattern, songs are trained for each emotion. By integrating feature extraction and machine learning techniques, the emotion is detected from a real face image, and once the mood is derived from the input image, songs matching that mood are played to engage the user. In this approach the application connects with human feelings, giving a personal touch to the user. Our proposed system therefore concentrates on identifying human feelings to build an emotion-based music player using computer vision and machine learning techniques. For the experimental results, we use OpenCV for emotion detection and music recommendation.


Introduction
As the music field advances, many inventions are being made to attract more customers and increase business revenue through advertisements. Music is connected with listeners' feelings, and some researchers state that music is a good remedy for mental disorders, sleeping problems, depression, etc. (Pavan, 2020). Hence many researchers contribute new innovations in the field of music content, applying software technologies, mobile applications, signal processing, analytics, etc.
Automatic audio genre/mood recognition, song similarity computation, audio maker detection, audio-to-score matching, query-by-singing/humming, and a few others are recent problems in music data retrieval. Current systems predict and design training models on the basis of consumer listening patterns, so many systems do not link human feelings into the recommendation (Chinnamahammad bhasha, 2020; Balamurugan, 2020). As a result, designing and implementing a content-based music recommendation framework that automatically detects human emotions has a broader reach. This involves emotion detection, low-level feature extraction, and an interface connecting to music recommendation.
Human emotion is dynamic and keeps changing over time, so observing human emotion and storing values for classification is an important factor. Some researchers have proposed "mental state detection", covering mental states such as happy, sad, anger, disgust, fear, surprise, and serene. All these mental states can be detected using trained datasets. Detection of human feelings from facial emotions and speech is increasing nowadays (Balamurugan, 2017; Aroulanandam, 2020). Emotion detection/recognition will play a crucial role in several other potential applications such as music entertainment and human-computer interaction systems.

Related Works
This paper explains an automatic face recognition system in three stages: 1. face detection, 2. feature extraction, and 3. expression recognition. It describes detecting the face and performing morphological operations to obtain features such as the eyes and mouth. The authors proposed the AAM technique for facial feature extraction, e.g. extracting the eyes, eyebrows, mouth, and lips (Anagha, 2014; Bhasa, 2020).
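As a minimal illustration of the morphological step such systems apply when isolating facial regions, the following sketch implements binary erosion with a 3x3 structuring element in plain Python; the toy mask and element size are illustrative assumptions, not the surveyed system's actual parameters:

```python
def erode(mask):
    """Binary erosion with a 3x3 square structuring element.

    A pixel survives only if every pixel in its 3x3 neighbourhood is set;
    border pixels are treated as background.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

# A 5x5 toy mask: a solid 3x3 blob erodes to its single centre pixel.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
eroded = erode(mask)
```

In practice a library routine such as OpenCV's erosion/dilation would be used on the thresholded face image rather than a hand-rolled loop.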
This paper analyses and proposes Bezier curve fitting, used for extracting facial features from the original facial input images and for extracting regions of interest. First the input image colour is adjusted to make it compatible with the feature extraction process (ChinnamahammadBhasha, 2020). Then the eye and mouth features are extracted using a region-of-interest technique to obtain feature points for matching. Finally, by fitting Bezier curves to the eye and mouth points, the human feelings are understood (Yong, 2013).
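To make the Bezier step concrete, the sketch below evaluates a Bezier curve through facial feature points using de Casteljau's algorithm; the eyelid landmark coordinates are hypothetical values chosen for illustration:

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1)
    via de Casteljau's repeated linear interpolation."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical landmark points along an upper eyelid (pixel coordinates).
eyelid = [(10.0, 20.0), (18.0, 12.0), (26.0, 12.0), (34.0, 20.0)]
start = bezier_point(eyelid, 0.0)  # the curve begins at the first landmark
apex = bezier_point(eyelid, 0.5)   # mid-curve point
```

Comparing such curve parameters (e.g. the mid-curve height of the eye or the curvature of the mouth) across expressions is one way the fitted curves can be turned into emotion cues.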
A theoretical idea of a music recommendation method based on mood images is proposed in this research paper. The song is selected depending on the genre it belongs to; this is performed manually and does not take individual emotions into account. The suggested algorithm recommends music based on the music's genre and sound, and when it matches the user's listening habits, the music list is modified and approved (ArtoLehtiniemi, 2012; ChinnamahammadBhasha, 2020).
This article explains how to build an automatic music player based on user click trends from movie music. The authors suggest a music recommendation pattern based on consumer behaviour, using association rules to work out the correlation between emotion and song. The experimental findings indicate an accuracy of 80%, but since human emotions change over time, robust estimation and identification of human emotion is a key requirement in music recommendation systems (Abdat, 2011; Garikapati, 2020).
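The association-rule idea can be sketched as computing the confidence of a rule "emotion => song" over a listening log; the session data and song names below are invented for illustration, not drawn from the cited study:

```python
def rule_confidence(sessions, emotion, song):
    """Confidence of the rule 'emotion => song':
    the fraction of sessions with this emotion in which this song was chosen."""
    emotion_count = sum(1 for e, _ in sessions if e == emotion)
    joint_count = sum(1 for e, s in sessions if e == emotion and s == song)
    return joint_count / emotion_count if emotion_count else 0.0

# Hypothetical (emotion, song) listening log.
log = [
    ("happy", "upbeat_pop"), ("happy", "upbeat_pop"),
    ("happy", "soft_rock"), ("sad", "slow_ballad"),
]
conf = rule_confidence(log, "happy", "upbeat_pop")  # 2 of 3 happy sessions
```

Rules whose confidence (and support) exceed chosen thresholds would then drive which songs are suggested for a detected emotion.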
This research paper proposed MoodPlay, a music recommendation framework that takes into account both the user's mood and the music they are listening to. MoodPlay investigates and suggests music depending on the users' subjective aspects (Deepthi, 2019). The research in this paper was based on a pre-existing user profile; the authors used pre-existing photographs as input and made recommendations based on emotional dimensions (Renuka, 2012).
Manual segregation of a playlist and annotation of songs in accordance with the user's emotion has also been proposed. Existing algorithms are slow, and integrating extra hardware such as electroencephalogram systems and sensors would be complex and provide less promising results. This paper presents an algorithm that automates the generation of an audio playlist based on the facial expressions of a user, and additionally aims at increasing the accuracy of the designed system. The facial expression recognition module of the proposed algorithm is validated, tested, and experimented on a pre-defined image dataset (Kiruthika, 2020; Balamurugan, 2020).

Methodology
In the proposed system, we integrate computer vision and machine learning techniques to connect facial emotion with music recommendation. For the experimental results we used the PyCharm tool for coding. [Fig 1] We take the real face of the user as input through a webcam, and image processing techniques are then applied to the acquired image. Features are extracted from the input images using a point detection algorithm. A classifier built with OpenCV is trained on the input images for facial emotion detection. Based on the detected emotion, music is played automatically from the coding folder.
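The final dispatch step of this pipeline can be sketched as follows. The emotion label is assumed to come from the OpenCV-trained classifier described above; the library structure, song paths, and the "neutral" fallback are illustrative assumptions, not the system's actual folder layout:

```python
def select_track(emotion, library, fallback="neutral"):
    """Pick the next track for a detected emotion.

    `library` maps emotion labels to ordered song lists (in the real system
    this would come from scanning the song folders on disk). Unknown
    emotions fall back to the neutral playlist.
    """
    tracks = library.get(emotion) or library.get(fallback, [])
    return tracks[0] if tracks else None

# Hypothetical song library keyed by emotion label.
library = {
    "happy": ["songs/happy/track01.mp3"],
    "sad": ["songs/sad/track01.mp3"],
    "neutral": ["songs/neutral/track01.mp3"],
}
track = select_track("happy", library)
unknown = select_track("surprise", library)  # falls back to the neutral list
```

The returned path would then be handed to an audio playback call, so the recommendation logic stays decoupled from both the detection and the playback layers.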

Conclusion
In this project, we propose a music recommendation system based on user emotions. A human face image is given as input, the facial emotion is detected from it, and music is played automatically based on that emotion. For feature extraction we used a point detection algorithm, and for machine learning training we used OpenCV, with promising results. Our proposed system thus provides a good level of accuracy on real face images.