This paper is published in Volume-4, Issue-2, 2018
Area
Machine Learning
Author
Rohan Raphy Thattil, Joseph Gigo Ignatiaus, Shafeer P. N, Shyam Krishna M
Org/Univ
Sahrdaya College of Engineering and Technology, Kodakara, Kerala, India
Pub. Date
07 April, 2018
Paper ID
V4I2-1677
Publisher
Keywords
CK+ Dataset, Emotion Recognition, Feature Extraction, MFCC, SAVEE Dataset, SVM.

Citations

IEEE
Rohan Raphy Thattil, Joseph Gigo Ignatiaus, Shafeer P. N, Shyam Krishna M. Emotion recognition using face and voice analysis, International Journal of Advance Research, Ideas and Innovations in Technology, vol. 4, no. 2, 2018, www.IJARIIT.com.

APA
Rohan Raphy Thattil, Joseph Gigo Ignatiaus, Shafeer P. N, Shyam Krishna M (2018). Emotion recognition using face and voice analysis. International Journal of Advance Research, Ideas and Innovations in Technology, 4(2) www.IJARIIT.com.

MLA
Rohan Raphy Thattil, Joseph Gigo Ignatiaus, Shafeer P. N, Shyam Krishna M. "Emotion recognition using face and voice analysis." International Journal of Advance Research, Ideas and Innovations in Technology 4.2 (2018). www.IJARIIT.com.

Abstract

The goal of this project is to design a system that reads a person's face and voice in conjunction and detects his or her sentimental and emotional state from that data. Humans are said to convey almost 50% of what they want to express through non-verbal cues. Building on this, we analyze two such cues, tone of voice and facial expression, to infer a person's emotional state from facial-expression and speech features. We make use of preexisting databases, classify their samples according to emotion, and train classifiers for emotion recognition on new input data. Many datasets are available from previous surveys; here we use the Extended Cohn-Kanade (CK+) database for facial expressions and the SAVEE database for speech data. An SVM classifier is trained on the labeled samples so that it can assign the appropriate emotional label to new input. The emotional states inferred from the user's face and voice are then read in conjunction to obtain a more accurate representation of the user's emotional state, and appropriate music is played depending on that emotion.
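As a rough illustration of the classification stage described in the abstract, the sketch below trains an RBF-kernel SVM on synthetic feature vectors standing in for 13-dimensional MFCC features. The emotion labels, feature dimensionality, and class-dependent Gaussian features are assumptions made for this illustration only; the paper's actual MFCC extraction and facial-feature pipeline is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for MFCC feature vectors: 13 coefficients per clip,
# 50 clips per emotion, with class-dependent means so the task is learnable.
# (Real features would come from an MFCC extractor run on SAVEE audio.)
emotions = ["anger", "happiness", "sadness", "neutral"]
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(50, 13))
               for i in range(len(emotions))])
y = np.repeat(emotions, 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# SVM with an RBF kernel, a common choice for emotion classification.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# Predict an emotional label for one held-out feature vector.
predicted = clf.predict(X_test[:1])[0]
print(predicted, round(clf.score(X_test, y_test), 2))
```

In the full system described above, one such classifier per modality (speech and face) would be trained, and their outputs combined to decide which music to play.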