Implementation and Analysis of Sentiment Analysis on Facial Expressions Using HAAR Cascade Methods

Sentiment analysis is the process of exploring, analyzing and organizing human feelings. Here it refers to extracting human feelings from face pictures. It involves separating an image into components such as the face and the background, and uses lip and eye shape to infer emotion. The implementation relies on tools such as PyCharm, NumPy, OpenCV and Python. Its main objective is to determine a person's mood, such as happy or sad. This report presents the emotional state of a human being as well as the different emotions of a human in different situations.


Introduction
Image processing extracts information from a digital image, typically acquired with a digital camera [1]. A digital image [2], [3] is a collection of elements, each with its own value at a particular position; these elements are called pixels. The complete sentiment-analysis process is shown in Figure 1: it comprises face detection, feature extraction and face recognition. Facial sentiment analysis [4], [5] is widely used these days since it provides a natural and efficient way for humans to communicate. Understanding the human face has many aspects, from information-processing system analysis and lie detection to emotion recognition. Other applications related to the face and its sentiments include personal identification and access control, teleconferencing, forensic applications, movies, human-computer interaction and automated surveillance. In expression recognition, features are extracted in three different stages, as shown in the figure. The machine takes an image as input and detects the person's face [6], [7]. The face region is used for feature extraction, and the extracted features are then processed. The processed image is used for face recognition against the images stored in the data set. Facial recognition software extracts human facial features using a mathematical approach to store the data and a face print of the human face [8]. The software uses different techniques to work from a live capture or a digital image and computes the face print by which a human face is identified; it identifies approximately 80 nodal points on a human face.

Feature Extraction Techniques
Every human face shares some common properties. These commonalities can be exploited using Haar features, for example:
• Brightness: the upper cheeks are brighter than the eye region.
• Area and size: eye shape, mouth shape and nose extension.
A large set of input data is given to the algorithm to find the features of human faces. Sometimes the data is too large to process directly; in that case it is reduced to a smaller set of features.
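The brightness contrast above (cheeks brighter than the eye region) is exactly what a rectangle Haar feature measures. A minimal NumPy sketch, assuming a simple vertical two-rectangle feature evaluated on an integral image, the data structure that makes Haar features cheap to compute:

```python
import numpy as np

def integral_image(img):
    # Padded summed-area table: ii[y, x] = sum of img[:y, :x].
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    # Sum over img[y:y+h, x:x+w] using only four table lookups.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_haar(ii, x, y, w, h):
    # Vertical two-rectangle Haar feature: bottom half minus top half.
    # A positive value means the lower region (e.g. cheeks) is brighter
    # than the upper region (e.g. eyes).
    half = h // 2
    top = rect_sum(ii, x, y, w, half)
    bottom = rect_sum(ii, x, y + half, w, half)
    return bottom - top
```

Because each rectangle sum costs four lookups regardless of its size, a cascade can evaluate thousands of such features per window in real time.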

Face Recognition
In facial recognition an individual face is compared with a live capture. Facial recognition was first used for security purposes, but it now appears in many different applications. In this report we recognize faces in three settings: recognizing the face of a person from the data available in data sets of different image types (jpg, jpeg, png, etc.) [9], [10]; running the same detection and feature extraction on a pre-captured video or short movie; and extracting features from images captured by a live camera. Current trends show great future scope, with much new work to be done at every level. Every large company is investing in this technology, and a large amount of research is ongoing to push it further; even government agencies are taking a strong interest in it. It is regarded as a technology of the future, and we can succeed only if we understand it as deeply as possible. Many research papers related to it are published each year all over the world [11], [12].

Literature Survey
A few models exist in the open market, but none of them properly explains the sentiment of a human being from the expression appearing on the face. Examples include feature-based approaches and FAU-based facial expression recognition [13]-[15], where research distinguishes two main directions: feature-based and template-based. The feature-based model uses geometrical information of the face for detection and feature extraction, whereas the template-based model uses 2D and 3D head and facial models as templates for extracting expression information [16]-[18].
Facial feature detection and tracking can be based on active infrared illumination, which provides reliable visual information under variable lighting and head motion. Classification is performed using a Dynamic Bayesian Network (DBN). A method for static and dynamic segmentation and classification of facial expressions has been proposed: for the static approach the DBN is organized as a tree-like structure, while the dynamic approach uses multi-level structures. The proposed system automatically detects frontal faces in video streams and classifies them. Facial expression images are coded using a multi-orientation, multi-resolution set of filters aligned approximately with the face. The similarity space derived from this facial image representation is compared with one derived from semantic ratings of the images by human observers, and classification is performed by comparing the resulting similarity spaces. A Neural Network (NN) [19] can be designed to perform facial expression recognition. The features used can be either the geometric positions of a set of points on the face or a set of multi-scale, multi-orientation Gabor wavelet coefficients extracted from the face image at those points. Recognition is performed with a two-layer neural network [20]. The resulting system is robust to changes in face location and scale. Feature extraction and facial expression classification are performed by groups of neurons that take a feature map as input and adjust their weights for correct classification on the data set.

PROPOSED SYSTEM
The proposed face detection and image recognition system is divided into three modules: the first is face detection, the second is sentiment analysis and the third is the graphical user interface. We detect faces in real time and then interpret different facial expressions or sentiments such as 'Happy' and 'Sad'.

IMPLEMENTATION AND WORKING
The implementation of the project follows the system design below, step by step:
1. First, gather training data for the different emotions, such as happy and sad. The data should be pre-processed to give accurate results.
2. Train on the collected dataset to create a predictive model that predicts the different emotions.

5. HAAR Cascade uses an eye XML classifier for eye detection.
6. HAAR Cascade uses a frontal-face XML classifier for face detection.
7. HAAR Cascade uses a mouth XML classifier for mouth detection.

RESULTS
Nowadays, sentiment analysis is a hot topic, and our project's main objective was to implement sentiment analysis on various use cases. We conclude that the system we use to determine the sentiments of the targeted subject executes successfully, but works as separate modules. The bottleneck of the process is our hardware, which cannot be used to train on a large number of images because of the high demand on the processor, the processor's slow performance and the lack of an adequate graphics processing unit (GPU).
The identification module can generate a data set and be trained on its own to identify any subject with adequate accuracy. The detection module detects the emotion of subjects with only adequate accuracy because of this bottleneck. Both of them, however, execute well on their own.
For determining the emotion or present state of a subject, facial expression alone will not give an accurate result; we must also include voice patterns (pitch and amplitude), the language used and the speaker's tone, together with the context. Determining these requires speech analysis, natural language processing (NLP) and many other deep-learning capabilities, and it also needs high-end server hardware. The model is accuracy-driven: the more it is trained, the more accurate it becomes.

Conclusion
In this project we have performed sentiment analysis using facial recognition on human faces. Sentiment analysis has varied applications, such as security, understanding the mindset of an employee, assessing the state of patients, and investigative purposes. It can also be used in different types of examination settings. We used Haar cascades for classification and feature extraction. The approach has a wide canvas for the coming days. After implementation, we are able to extract two sentiment features ('Happy' and 'Sad') from human faces; in future, other features can also be extracted.
The demand for data on consumers' sentiments has grown as brands seek to use such data to market and grow; having this access gives brands the upper hand in marketing and in connecting more effectively with consumers. This is not limited to brand building: it could also replace many legacy systems, for example web video interviews, where the interviewer would have more data at hand for assessment. Sentiment analysis has very vast applications, and we cannot possibly think of all the use cases it may serve in the future.