A Transfer Learning-based Approach for Multimodal Emotion Recognition

Ayushi Jain

Abstract

Multimodal emotion recognition is a rapidly expanding field that aims to identify and understand human emotions from multiple modalities, such as speech, facial expressions, and physiological signals. A number of studies have shown that transfer learning strategies are effective at overcoming the challenges of processing and integrating data from different modalities. Publicly available datasets such as IEMOCAP, EmoReact, and AffectNet provide useful resources for evaluating multimodal emotion recognition models. Data variability, data quality, modality integration, limited labelled data, privacy and ethical concerns, and interpretability are only some of the hurdles that must be overcome to build accurate and effective models. Addressing these challenges requires a multidisciplinary approach and continued research in this area. The goal of this research is to develop more robust and accurate models for multimodal emotion recognition that can be applied across a variety of contexts and populations.
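The abstract does not include an implementation, but the transfer-learning setup it describes can be illustrated with a minimal sketch. The following PyTorch example reuses a frozen, ImageNet-pretrained image backbone for the facial-expression stream, pairs it with a small trainable encoder for precomputed speech features, and trains only a late-fusion classification head. The framework, feature dimensions, and label count are all assumptions for illustration, not details taken from the paper.

```python
# Minimal late-fusion transfer-learning sketch (assumptions: PyTorch/torchvision,
# 7 emotion classes, 128-d precomputed audio features).
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 7      # hypothetical label set size (e.g., AffectNet-style basic emotions)
AUDIO_FEAT_DIM = 128  # hypothetical size of precomputed speech features (e.g., MFCC statistics)

class MultimodalEmotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Transfer learning: start the face stream from ImageNet weights.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()      # keep the 512-d pooled features
        for p in backbone.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.face_encoder = backbone
        # Small trainable encoder for the speech modality.
        self.audio_encoder = nn.Sequential(
            nn.Linear(AUDIO_FEAT_DIM, 128), nn.ReLU())
        # Late fusion: concatenate modality embeddings, then classify.
        self.classifier = nn.Linear(512 + 128, NUM_EMOTIONS)

    def forward(self, face_img, audio_feat):
        f = self.face_encoder(face_img)    # (B, 512)
        a = self.audio_encoder(audio_feat) # (B, 128)
        return self.classifier(torch.cat([f, a], dim=1))

model = MultimodalEmotionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, AUDIO_FEAT_DIM))
print(logits.shape)  # torch.Size([2, 7])
```

Freezing the pretrained backbone and training only the fusion head is one common way to cope with the limited labelled data the abstract highlights; with larger datasets, the backbone could instead be fine-tuned end to end.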


How to Cite
Jain, A. (2020). A Transfer Learning-based Approach for Multimodal Emotion Recognition. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 11(3), 2001–2009. https://doi.org/10.17762/turcomat.v11i3.13597