Signal, Image and Video Processing, vol. 19, no. 12, 2025 (SCI-Expanded)
Semi-supervised learning, which utilizes both labeled and unlabeled samples, combines supervised and unsupervised learning methodologies. Among semi-supervised algorithms that aim to enhance model performance by extracting information from unlabeled samples, Self-Learning (SL) is widely used. In SL, pseudo-labels are generated from the predictions of the current model. In Ensemble Self-Learning (ESL), pseudo-label generation incorporates not only the predictions of the current model but also the predictions of models trained in previous iterations. ESL combines the ensemble prediction obtained from previously trained models with the prediction of the current model through weighting, and the weighted result is then used to generate pseudo-labels. This combination is analogous to the velocity vector in SGD with Momentum, which is accumulated from past gradients. By incorporating the influence of models trained in previous iterations, the ESL algorithm introduces an improvement over SL. In experiments in which all unlabeled samples were pseudo-labeled, ESL achieved 4.62%-8.79% higher accuracy than SL within the same training duration, depending on the architecture and dataset used.
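The sketch below illustrates the momentum-style pseudo-labeling step described above: an ensemble prediction averaged over previously trained models is blended with the current model's prediction before pseudo-labels are assigned. It is a minimal illustration, not the paper's implementation; the function name, the weighting coefficient `beta`, and the confidence `threshold` are assumed for the example.

```python
import numpy as np


def esl_pseudo_labels(current_probs, past_probs_list, beta=0.9, threshold=0.8):
    """Blend current-model predictions with an ensemble of past-model
    predictions (momentum-style weighting) to generate pseudo-labels.

    current_probs: (n_samples, n_classes) class probabilities from the
        model trained in the current iteration.
    past_probs_list: list of (n_samples, n_classes) arrays, one per
        previously trained model.
    beta: weight on the ensemble of past predictions (assumed name,
        analogous to the momentum coefficient in SGD with Momentum).
    threshold: minimum combined confidence required to accept a pseudo-label
        (assumed selection rule for this sketch).
    """
    if past_probs_list:
        # Ensemble prediction: average over models from previous iterations.
        ensemble_probs = np.mean(past_probs_list, axis=0)
        # Weighted combination of past ensemble and current prediction.
        combined = beta * ensemble_probs + (1.0 - beta) * current_probs
    else:
        # First iteration: no past models, fall back to plain SL behavior.
        combined = current_probs

    confidences = combined.max(axis=1)
    labels = combined.argmax(axis=1)
    mask = confidences >= threshold  # keep only confident pseudo-labels
    return labels, mask
```

In this reading, plain SL corresponds to `beta = 0` (only the current model contributes), while larger `beta` values let earlier iterations stabilize the pseudo-labels, mirroring how momentum smooths gradient updates.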