Merged Local Neighborhood Difference Pattern for Facial Expression Recognition
P. Shanthi1, S. Nickolas2

1P. Shanthi*, Department of Computer Applications, National Institute of Technology, Tiruchirappalli, Tamil Nadu, India.
2S. Nickolas, Department of Computer Applications, National Institute of Technology, Tiruchirappalli, Tamil Nadu, India.

Manuscript received on November 13, 2019. | Revised Manuscript received on November 22, 2019. | Manuscript published on December 10, 2019. | PP: 4133-4141 | Volume-9 Issue-2, December 2019. | Retrieval Number: B7461129219/2019©BEIESP | DOI: 10.35940/ijitee.B7461.129219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Facial expression-based emotion recognition is one of the popular research domains in computer vision. Many machine vision-based feature extraction methods are available to increase the accuracy of Facial Expression Recognition (FER). In feature extraction, neighboring pixel values are manipulated in different ways to encode the texture information of muscle movements. However, defining a feature descriptor that is robust to external factors remains a challenging task. This paper introduces the Merged Local Neighborhood Difference Pattern (MLNDP), which encodes and merges two levels of representation. At the first level, each pixel is encoded with respect to the center pixel; at the second level, it is encoded based on its relationship with the closest neighboring pixel. Finally, the two levels of encoding are logically merged so that only the texture positively encoded at both levels is retained. The feature dimension is then reduced using the chi-square statistical test, and the final classification is carried out with a multiclass SVM on two datasets, CK+ and MMI. The proposed descriptor is compared against other local descriptors, namely LDP, LTP, LDN, and LGP. Experimental results show that the proposed descriptor outperforms the others, achieving 97.86% on the CK+ dataset and 95.29% on the MMI dataset. The classifier comparison further confirms that the combination of MLNDP with a multiclass SVM performs better than the other descriptor-classifier combinations.
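The following is a minimal sketch of the two-level encoding and logical merge described above, written for a single 3x3 neighborhood. The neighborhood size, the clockwise ordering, the choice of "closest neighbor" as the next pixel on the ring, and the AND-based merge are assumptions made for illustration; the paper defines the exact MLNDP operators.

```python
import numpy as np

def mlndp_code(patch):
    """Illustrative two-level encoding for one 3x3 grayscale patch (assumed layout)."""
    center = patch[1, 1]
    # Eight neighbors in clockwise order starting from the top-left (assumed ordering).
    neighbors = np.array([patch[0, 0], patch[0, 1], patch[0, 2],
                          patch[1, 2], patch[2, 2], patch[2, 1],
                          patch[2, 0], patch[1, 0]], dtype=np.int32)

    # Level 1: sign of the difference of each neighbor against the center pixel.
    level1 = (neighbors >= center).astype(np.uint8)

    # Level 2: sign of the difference against the closest neighboring pixel,
    # taken here as the next neighbor clockwise on the ring (an assumption).
    level2 = (neighbors >= np.roll(neighbors, -1)).astype(np.uint8)

    # Merge: logical AND keeps only positions encoded positively at both levels.
    merged = np.logical_and(level1, level2).astype(np.uint8)

    # Pack the eight merged bits into a single code value in [0, 255].
    weights = 1 << np.arange(8)
    return int(np.dot(merged, weights))
```

In a full pipeline of this kind, the per-pixel codes would typically be pooled into regional histograms, reduced with a chi-square feature selector (e.g., scikit-learn's SelectKBest with chi2), and classified with a multiclass SVM (e.g., sklearn.svm.SVC); the specific pooling and parameter choices in the paper are not reproduced here.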
Keywords: Emotion, Facial Expression, Merged Local Neighborhood Difference Pattern, Support Vector Machine
Scope of the Article: Pattern Recognition