Implementation of SVM with SMO for Identifying Speech Emotions using FFT and Source Features
Abstract
For speaker-independent emotion recognition, researchers have recently proposed Fourier-parameter and mean-Fourier-parameter models. This study proposes fast Fourier transform (FFT) coefficient features, namely the minimum, maximum, mean, and standard deviation, in addition to source features, i.e., the glottal volume velocity. The speech signal is further separated into voiced, unvoiced, and silent segments so that the contribution of each portion of the speech corpus to emotion recognition can be observed. Experimental results indicate that the proposed method raises the speech emotion recognition rate to 80.85% on EmoDB, while on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), with eight emotions, the highest recognition rate achieved is 70.19%.
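The sketch below illustrates the general pipeline the abstract describes, not the authors' implementation: frame-level FFT-coefficient statistics (minimum, maximum, mean, standard deviation), a crude energy/zero-crossing split into voiced, unvoiced, and silent frames, and an SVM classifier (scikit-learn's SVC, whose libsvm backend uses an SMO-style solver). Frame sizes, thresholds, and kernel settings are illustrative assumptions; glottal source features are omitted.

```python
# Minimal sketch (assumptions noted above), not the paper's exact method.
import numpy as np
from sklearn.svm import SVC

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (sizes are illustrative)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def fft_stats(frames):
    """Min, max, mean, and std of the FFT magnitude coefficients per frame."""
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.column_stack([mag.min(1), mag.max(1), mag.mean(1), mag.std(1)])

def segment_labels(frames, energy_th=1e-4, zcr_th=0.25):
    """Crude voiced/unvoiced/silent labelling; thresholds are assumptions."""
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return np.where(energy < energy_th, "silent",
                    np.where(zcr > zcr_th, "unvoiced", "voiced"))

def utterance_features(x):
    """Aggregate FFT statistics of the voiced frames into one feature vector."""
    frames = frame_signal(x)
    stats = fft_stats(frames)
    voiced = stats[segment_labels(frames) == "voiced"]
    if len(voiced) == 0:        # fall back to all frames if nothing is voiced
        voiced = stats
    return voiced.mean(axis=0)

# Hypothetical usage with pre-loaded utterances (signals) and emotion labels:
# X = np.array([utterance_features(sig) for sig in signals])
# clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, emotions)
```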
Article Details
This article is distributed under a Creative Commons Attribution (CC BY) license. You are free to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
- The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.