Automatic reaction emotion estimation in a human-human dyadic setting using Deep Neural Networks


Sham A. H., Tikka P., Lamas D., Anbarjafari G.

SIGNAL IMAGE AND VIDEO PROCESSING, vol. 17, no. 2, pp. 527-534, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 17 Issue: 2
  • Publication Date: 2023
  • DOI: 10.1007/s11760-022-02257-5
  • Journal Name: SIGNAL IMAGE AND VIDEO PROCESSING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, zbMATH
  • Page Numbers: pp. 527-534
  • Keywords: Facial Expression Recognition (FER), Deep Neural Networks, Reaction emotion, LSTM, EXPRESSIONS
  • Affiliated with Yıldız Teknik Üniversitesi: No

Abstract

While Facial Expression Recognition (FER) has become a broadly applied technology for tracking individual people's emotions, its application to estimating emotion in human-to-human dyadic interaction is still relatively sparse. This paper describes a study in which FER is applied to a dyadic, video-mediated interaction to collect facial interaction data used to predict the emotions of one of the interlocutors. To realize this, we used the histogram of oriented gradients algorithm to detect human faces in the videos, analyzing every frame. We then applied a Deep Neural Network (DNN) model to recognize the facial expressions of the two people having a conversation in the videos, measuring the facial patterns of both interlocutors as indicators of their emotions throughout the interaction. Afterward, we trained a Long Short-Term Memory (LSTM) model to estimate one person's emotions from the video. We performed the analysis on videos of a specific psychiatrist and his patients and then estimated the patients' emotions. This work shows how our multi-stage DNN (Mini-Xception) and LSTM models can predict reaction emotions using the patient's facial expressions during the interaction. We believe that the proposed method can be applied, in the future, to generating the facial expressions of virtual characters or social robots when they interact with humans.
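
The abstract outlines a three-stage pipeline: HOG-based face detection on every frame, per-frame expression classification with a Mini-Xception DNN, and an LSTM over the resulting expression sequence to estimate the reaction emotion. The sketch below shows how such a pipeline could be wired together in Python; it is not the authors' implementation. The checkpoint name `mini_xception.h5`, the seven FER2013 emotion labels, the 64x64 grayscale input size, and the LSTM hyperparameters are assumptions made for illustration.

```python
import cv2
import dlib
import numpy as np
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential, load_model

# Seven FER2013 emotion classes commonly used with Mini-Xception (assumed here).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

hog_detector = dlib.get_frontal_face_detector()   # HOG-based frontal face detector
fer_model = load_model("mini_xception.h5")        # hypothetical pre-trained FER checkpoint


def expression_sequence(video_path):
    """Return an (n_frames, 7) array of per-frame emotion probabilities for one video."""
    cap = cv2.VideoCapture(video_path)
    probs = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = hog_detector(gray, 1)              # detect faces in every frame
        if len(faces) == 0:
            continue
        f = faces[0]                               # take the first detection as the speaker's face
        crop = gray[max(f.top(), 0):f.bottom(), max(f.left(), 0):f.right()]
        crop = cv2.resize(crop, (64, 64)).astype("float32") / 255.0  # assumed Mini-Xception input
        probs.append(fer_model.predict(crop[None, :, :, None], verbose=0)[0])
    cap.release()
    return np.asarray(probs)


# Reaction-emotion estimator: an LSTM over one interlocutor's expression sequence,
# emitting an emotion class for the other interlocutor (hyperparameters are illustrative).
reaction_model = Sequential([
    LSTM(64, input_shape=(None, len(EMOTIONS))),
    Dense(len(EMOTIONS), activation="softmax"),
])
reaction_model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

In this layout, expression_sequence would be run on one interlocutor's video, and the resulting per-frame probability sequence fed to reaction_model (after training on labeled dyadic recordings) to estimate the other party's reaction emotion; the paper's exact preprocessing, training data, and model configuration are described in the full article.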