For speaker-independent emotion recognition, researchers have recently proposed Fourier parameter and mean Fourier parameter models. This study proposes statistical features of fast Fourier transform coefficients, namely the minimum, maximum, mean, and standard deviation. In addition to source features, i.e., the glottal volume velocity, this study separates speech into voiced, unvoiced, and silent segments so that the effect of each part of the speech corpus on emotion recognition can be observed. Experimental results indicate that the proposed method improves the speech emotion recognition rate to 80.85% on EmoDB. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), covering eight emotions, the highest recognition rate achieved was 70.19%.
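The two ideas in the abstract, frame-wise FFT-coefficient statistics and voiced/unvoiced/silent segmentation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame length, hop size, window, and the energy and zero-crossing thresholds are assumptions chosen for demonstration.

```python
import numpy as np

def fft_stat_features(signal, frame_len=512, hop=256):
    """Min/max/mean/std of FFT magnitudes across frames (illustrative)."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    # One-sided FFT magnitude of each Hann-windowed frame.
    mags = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    # Per-bin statistics over all frames, concatenated into one vector.
    return np.concatenate([mags.min(0), mags.max(0),
                           mags.mean(0), mags.std(0)])

def segment_frames(signal, frame_len=512, hop=256,
                   e_thresh=0.01, zcr_thresh=0.1):
    """Label frames silent/unvoiced/voiced via short-time energy and
    zero-crossing rate; the thresholds are illustrative assumptions."""
    labels = []
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        energy = np.mean(frame ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
        if energy < e_thresh:
            labels.append("silent")          # too quiet to be speech
        elif zcr > zcr_thresh:
            labels.append("unvoiced")        # noisy, many zero crossings
        else:
            labels.append("voiced")          # periodic, low crossing rate
    return labels

# Usage: one second of a 440 Hz tone sampled at 16 kHz.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = fft_stat_features(sig)
print(feats.shape)        # 4 statistics x 257 bins -> (1028,)
print(set(segment_frames(sig)))
```

With a pure tone, every frame is labeled "voiced": the energy (0.5) is well above the silence threshold and the zero-crossing rate (about 0.055 crossings per sample) is below the unvoiced threshold. In practice these statistics would be computed separately on the voiced, unvoiced, and silent portions before classification.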