Ethical AI in facial expression analysis: racial bias

Sham A. H., Aktas K., Rizhinashvili D., Kuklianov D., Alisinanoglu F., Ofodile I., et al.

SIGNAL IMAGE AND VIDEO PROCESSING, vol.17, no.2, pp.399-406, 2023 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 17 Issue: 2
  • Publication Date: 2023
  • Doi Number: 10.1007/s11760-022-02246-8
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, zbMATH
  • Page Numbers: pp.399-406
  • Keywords: Facial expression recognition (FER), Deep neural networks, Reaction emotion, LSTM, Recognition, Race
  • Yıldız Technical University Affiliated: No


Facial expression recognition using deep neural networks has become very popular due to its strong performance. However, the datasets used during the development and testing of these methods lack a balanced distribution of races among the sample images. This leaves open the possibility that the methods are biased toward certain races. A concern about fairness therefore arises, and the scarcity of research investigating racial bias only heightens it. Moreover, such bias would also degrade real-world performance through poor generalization. For these reasons, in this study, we investigated racial bias in popular state-of-the-art facial expression recognition methods such as Deep Emotion, Self-Cure Network, ResNet50, InceptionV3, and DenseNet121. We compiled a curated dataset with images of different races and cross-checked the bias for methods trained and tested on images of people of other races. We observed that the methods are biased toward the races included in their training data. Moreover, if the training dataset is imbalanced, an increase in performance increases the bias as well. Some methods can partially compensate for the bias when enough variance is provided in the training set, but this does not eliminate it completely. Our findings suggest that unbiased performance can be obtained by adding the missing races to the training data in equal proportions.
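The cross-group comparison described in the abstract can be sketched as a simple per-group evaluation: compute a model's accuracy separately for each demographic group and report the gap between the best- and worst-served groups. The sketch below is illustrative only; the group labels, emotion labels, and data are hypothetical and not taken from the paper's dataset or methods.

```python
# Hedged sketch of a per-group bias check for a facial expression classifier.
# All names and data here are illustrative assumptions, not the paper's protocol.
from collections import defaultdict


def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}


def accuracy_gap(acc_by_group):
    """Max minus min per-group accuracy; 0 means equal performance across groups."""
    values = list(acc_by_group.values())
    return max(values) - min(values)


# Toy predictions: the classifier does well on group "A" but poorly on "B",
# the pattern one would expect from a model trained mostly on group "A".
y_true = ["happy", "sad", "happy", "sad", "happy", "sad"]
y_pred = ["happy", "sad", "happy", "happy", "happy", "happy"]
groups = ["A", "A", "A", "B", "B", "B"]

acc = per_group_accuracy(y_true, y_pred, groups)
print(acc)                 # per-group accuracies, e.g. {'A': 1.0, 'B': 0.333...}
print(accuracy_gap(acc))   # the bias gap between groups
```

In a full evaluation one would replace the toy lists with real per-image predictions from each trained model and repeat the computation for every train/test race pairing, as the study does across its five architectures.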