Ethical AI in facial expression analysis: racial bias


Sham A. H., Aktas K., Rizhinashvili D., Kuklianov D., Alisinanoglu F., Ofodile I., et al.

SIGNAL IMAGE AND VIDEO PROCESSING, vol.17, no.2, pp.399-406, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 17 Issue: 2
  • Publication Date: 2023
  • DOI: 10.1007/s11760-022-02246-8
  • Journal Name: SIGNAL IMAGE AND VIDEO PROCESSING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, zbMATH
  • Page Numbers: pp.399-406
  • Keywords: Facial expression recognition (FER), Deep neural networks, Reaction emotion, LSTM, RECOGNITION, RACE
  • Affiliated with Yıldız Teknik Üniversitesi: No

Abstract

Facial expression recognition using deep neural networks has become very popular due to its strong performance. However, the datasets used during the development and testing of these methods lack a balanced distribution of races among the sample images. This leaves open the possibility that the methods are biased toward certain races, raising a fairness concern that the scarcity of research into racial bias only deepens. Moreover, such bias would degrade real-world performance due to poor generalization. For these reasons, in this study, we investigated racial bias within popular state-of-the-art facial expression recognition methods such as Deep Emotion, Self-Cure Network, ResNet50, InceptionV3, and DenseNet121. We compiled a curated dataset with images of different races and cross-checked the bias by training the methods on images of one race and testing them on images of other races. We observed that the methods are biased toward the races included in the training data. Moreover, if the training dataset is imbalanced, an increase in performance increases the bias as well. Some methods can partially compensate for the bias if enough variance is provided in the training set, but this does not eliminate the bias completely. Our findings suggest that unbiased performance can be obtained by adding the missing races to the training data in equal proportions.
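To illustrate the kind of cross-race evaluation the abstract describes, the sketch below computes per-race accuracy for a FER model's predictions and a simple bias gap (the spread between the best- and worst-served race). This is a minimal illustration, not the paper's actual protocol; the function names, the gap metric, and the toy labels are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of a per-race bias check for FER predictions.
# The gap metric (max - min per-race accuracy) is an illustrative choice,
# not the measure used in the paper.
from collections import defaultdict

def per_race_accuracy(y_true, y_pred, race):
    """Break FER accuracy down by the race annotation of each sample."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, r in zip(y_true, y_pred, race):
        total[r] += 1
        correct[r] += int(t == p)
    return {r: correct[r] / total[r] for r in total}

def bias_gap(acc_by_race):
    """Spread between the best- and worst-served race; 0 means no gap."""
    vals = list(acc_by_race.values())
    return max(vals) - min(vals)

# Toy example: expression labels (e.g., 0=neutral, 1=happy, 2=sad)
# with hypothetical race tags "A" and "B".
y_true = [1, 0, 2, 1, 0, 2, 1, 0]
y_pred = [1, 0, 2, 0, 0, 1, 1, 0]
race   = ["A", "A", "A", "B", "B", "B", "A", "B"]

acc = per_race_accuracy(y_true, y_pred, race)
print(acc)            # {'A': 1.0, 'B': 0.5}
print(bias_gap(acc))  # 0.5 -> the model favors race A
```

Running such a check on a model trained on a race-imbalanced set versus one trained on an equally balanced set would surface the effect the abstract reports: the gap shrinks, but does not necessarily vanish, as the training distribution is balanced.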