FedLDC: Federated Learning with Loss-Dependent Coefficients in Pedestrian Detection


Akpınar E., Bolat B., Taşkıran M.

2024 11th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 21 - 22 March 2024, pp. 256-261

  • Publication Type: Conference Paper / Full Text Paper
  • DOI Number: 10.1109/spin60856.2024.10511799
  • City of Publication: Noida
  • Country of Publication: India
  • Page Numbers: pp. 256-261
  • Yıldız Teknik Üniversitesi Affiliated: Yes

Abstract

One of the advantages of autonomous driving systems is that they prevent possible accidents and thus reduce the number of casualties and injuries. With the development of artificial intelligence, many studies have been conducted in this field. Achieving strong generalization performance in such applications requires training the model on data with varied backgrounds and as many unique samples as possible. However, creating such diverse and large datasets within a single institution demands enormous labor and financial cost, and even when such datasets exist, sharing them directly raises personal data privacy concerns. Federated learning removes the need for data sharing, thereby avoiding these privacy issues and allowing many datasets to be used jointly for training without uploading them to a central server. Many federated learning methods, such as FedAvg, FedProx, and SCAFFOLD, have been proposed in the literature. However, in applications where the datasets differ in difficulty and comprehensiveness, the resulting global model weights are not sufficient to capture a good representation of all the datasets involved. For this reason, we propose Federated Learning with Loss-Dependent Coefficients (FedLDC), which uses the Faster R-CNN ResNet50 FPN network as the local model and offers a new federated learning approach that exploits the loss value observed during training to assign higher aggregation coefficients to datasets with high loss values, which also tend to have more diverse backgrounds and more unique data. In our study, we compared FedLDC with both FedAvg, the first example of federated learning, and the standard (non-federated) training methodology. In tests on three different datasets, FedLDC achieved the best results with a Miss Rate (MR) of 0.13 on Caltech Pedestrian and 0.07 on CityPersons, while on the ECP Day dataset it achieved an MR of 0.10, competitive with the standard training methodology's MR of 0.08.
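The abstract does not spell out the exact weighting formula, so the following is only a minimal sketch of loss-dependent aggregation, assuming that each client's coefficient is its training loss normalized over all clients (in contrast to FedAvg's sample-count weighting); the function and variable names are hypothetical, not taken from the paper.

def aggregate_loss_dependent(client_states, client_losses):
    """Aggregate client model weights with loss-dependent coefficients.

    Minimal sketch: each client's coefficient is its training loss
    normalized over all clients, so clients with higher loss (assumed
    to hold harder, more diverse data) contribute more to the global
    model. This is an illustrative assumption, not the paper's exact
    formula. client_states is a list of PyTorch state_dicts and
    client_losses is a list of matching scalar training losses.
    """
    total_loss = sum(client_losses)
    coefficients = [loss / total_loss for loss in client_losses]

    global_state = {}
    for key in client_states[0]:
        # Weighted sum of each parameter tensor across clients.
        global_state[key] = sum(
            c * state[key].float()
            for c, state in zip(coefficients, client_states)
        )
    return global_state

# Hypothetical usage with two clients' state_dicts and their epoch losses:
# global_state = aggregate_loss_dependent([state_a, state_b], [0.9, 0.4])

For comparison, FedAvg would instead set each coefficient proportional to the number of training samples held by the client; the rest of the aggregation step is identical.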