Racial Bias Mitigation with Federated Learning Approach


Dervisoglu H., AMASYALI M. F.

8th International Conference on Computer Science and Engineering, UBMK 2023, Burdur, Türkiye, 13-15 September 2023, pp. 548-553

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/ubmk59864.2023.10286618
  • City of Publication: Burdur
  • Country of Publication: Türkiye
  • Pages: pp. 548-553
  • Keywords: disparate impact, federated learning, racial bias, statistical parity
  • Affiliated with Yıldız Teknik Üniversitesi: Yes

Abstract

The use of Machine Learning models for decision support has increased the importance of the reliability of these models. Especially in cases where data sharing is undesirable due to confidentiality, an imbalance in the number of samples from different demographic groups, such as age, gender, and race, across the classes may bias the models trained on these data. In this study, it is shown that racial bias caused by the imbalance of racial information across the classes can be reduced with the Federated Learning approach, which allows different data sources to contribute to model training without the need for data sharing. In addition, the results obtained with Federated Learning were compared with Central Learning to examine whether reliable models can be obtained without data sharing. To compare the models, performance metrics such as accuracy and F1 score were used together with bias metrics that measure the fairness of the model, such as Disparate Impact and Statistical Parity. The results show that local bias decreased with Federated Learning. For the state of NH, where the greatest improvement was observed, Disparate Impact decreased from 6.42 to 1.66 with Federated Learning, and Statistical Parity improved from -0.33 to -0.12.
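The abstract reports fairness with Disparate Impact and Statistical Parity. Below is a minimal sketch of how these two metrics are commonly computed, assuming binary predictions and a binary sensitive attribute; the exact group ordering and sign convention used in the paper is not stated in the abstract, so the function names and arguments here are illustrative, not the authors' implementation.

```python
import numpy as np

def disparate_impact(y_pred, sensitive, unprivileged, privileged):
    """Ratio of positive-prediction rates between the unprivileged and
    privileged groups; a value of 1.0 indicates no disparate impact."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_unpriv = y_pred[sensitive == unprivileged].mean()
    rate_priv = y_pred[sensitive == privileged].mean()
    return rate_unpriv / rate_priv

def statistical_parity_difference(y_pred, sensitive, unprivileged, privileged):
    """Difference of positive-prediction rates; 0.0 indicates statistical parity."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    return (y_pred[sensitive == unprivileged].mean()
            - y_pred[sensitive == privileged].mean())

# Illustrative data: group "A" treated as privileged, group "B" as unprivileged.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
race   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(y_pred, race, unprivileged="B", privileged="A"))            # 0.33...
print(statistical_parity_difference(y_pred, race, unprivileged="B", privileged="A"))  # -0.5
```

Similarly, Federated Learning lets each data source train locally and share only model parameters with an aggregator. The sketch below shows a generic FedAvg-style weighted averaging step under that assumption; the paper's actual aggregation scheme and training setup are not specified in the abstract.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average client model parameters weighted by
    local dataset size, so raw data never leaves the clients.

    client_weights: one list of numpy arrays (layer parameters) per client.
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum((n / total) * w[layer_idx]
                    for w, n in zip(client_weights, client_sizes))
        averaged.append(layer)
    return averaged
```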