Ensemble Self-Learning for Pseudo-Labeling


Akdis M., ELBİR A.

Signal, Image and Video Processing, vol. 19, no. 12, 2025 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 19 Issue: 12
  • Publication Date: 2025
  • DOI: 10.1007/s11760-025-04618-2
  • Journal Name: Signal, Image and Video Processing
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, zbMATH
  • Keywords: Ensemble learning, Pseudo-labeling, Self-learning, Semi-supervised learning
  • Yıldız Technical University Affiliated: Yes

Abstract

Semi-supervised learning, which utilizes both labeled and unlabeled samples, combines supervised and unsupervised learning methodologies. Among semi-supervised learning algorithms that aim to enhance model performance by extracting information from unlabeled samples, Self-Learning (SL) is widely used. In SL, pseudo-labels are generated from the predictions of the current model. In Ensemble Self-Learning (ESL), pseudo-label generation incorporates not only the prediction of the current model but also the predictions of models trained in previous iterations. ESL combines the ensemble prediction obtained from previously trained models with the prediction of the current model through weighting, and the resulting combined prediction is then used to generate pseudo-labels. This is analogous to the velocity vector in SGD with momentum, which is accumulated from past gradients. By incorporating the influence of models trained in previous iterations, the ESL algorithm introduces an innovative improvement over SL. According to the experimental results, in which all unlabeled samples were pseudo-labeled, ESL achieved 4.62%-8.79% higher accuracy than SL within the same training duration, depending on the architecture and dataset used.
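
As a rough illustration of the momentum-style weighting described in the abstract, the sketch below shows one possible way such an update could look in Python. It assumes a scikit-learn-style classifier and uses illustrative names and hyperparameters (an ensemble weight beta and a fixed number of iterations); these are assumptions made for illustration, not the exact procedure or settings reported in the paper.

    import numpy as np
    from sklearn.base import clone
    from sklearn.linear_model import LogisticRegression

    def ensemble_self_learning(X_lab, y_lab, X_unlab, base_model=None,
                               n_iterations=5, beta=0.5):
        # Hypothetical sketch: blend the current model's class probabilities on
        # the unlabeled set with an exponentially weighted average of the
        # probabilities from models trained in previous iterations.
        model = base_model if base_model is not None else LogisticRegression(max_iter=1000)
        ensemble_probs = None          # accumulated "velocity" of past predictions
        X_train, y_train = X_lab, y_lab

        for _ in range(n_iterations):
            current = clone(model).fit(X_train, y_train)
            probs = current.predict_proba(X_unlab)

            # Weighted combination of the past ensemble prediction and the
            # current model's prediction (momentum-like update).
            if ensemble_probs is None:
                ensemble_probs = probs
            else:
                ensemble_probs = beta * ensemble_probs + (1.0 - beta) * probs

            # Pseudo-label every unlabeled sample from the combined prediction.
            # Assumes all classes stay present across iterations so that the
            # predict_proba columns remain aligned.
            pseudo_labels = current.classes_[ensemble_probs.argmax(axis=1)]

            # Retrain on the labeled data plus all pseudo-labeled samples.
            X_train = np.vstack([X_lab, X_unlab])
            y_train = np.concatenate([y_lab, pseudo_labels])

        return clone(model).fit(X_train, y_train)

In this sketch, beta plays the role of a momentum coefficient: beta = 0 reduces the procedure to plain self-learning driven only by the current model's predictions, while larger values of beta give more weight to the models from earlier iterations.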