A cyclical loss-based optimization algorithm for pretraining LLMs on noisy data


KESGİN H. T., AMASYALI M. F.

Knowledge-Based Systems, vol.328, 2025 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 328
  • Publication Date: 2025
  • DOI Number: 10.1016/j.knosys.2025.114189
  • Journal Name: Knowledge-Based Systems
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC, Library and Information Science Abstracts, Library, Information Science & Technology Abstracts (LISTA)
  • Keywords: Adaptive optimization, Cyclical scheduling, Data selection, In-training filtering, Large language models, LLM pretraining, Loss-based filtering, Low-resource languages, Noisy data, Web-scale datasets
  • Affiliated with Yıldız Teknik Üniversitesi: Yes

Abstract

Large language models (LLMs) depend on vast web-scale datasets, which frequently include noisy or low-quality samples that degrade performance and fairness despite conventional data cleaning. This paper introduces an in-training filtering approach that selectively ignores noisy data points based on real-time loss statistics during training. The approach combines deterministic and probabilistic selection mechanisms, using robust loss-based metrics and cyclically adjusted thresholds to balance stability and diversity. Evaluations on Turkish-language datasets demonstrate that this strategy reduces validation loss and improves downstream task accuracy without any preprocessing. By integrating filtering directly into the training loop, the method maintains data diversity, requires minimal overhead, and improves learning efficiency, offering a scalable alternative for robust LLM pretraining in noisy or low-resource environments.
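To make the general idea concrete, the sketch below shows one possible form of loss-based in-training filtering with a cyclically adjusted threshold, written in PyTorch. It is not the paper's exact algorithm: the function names, the median/MAD statistic, the cosine cycle, and the re-admission probability are illustrative assumptions standing in for the robust loss-based metrics and cyclical thresholds described in the abstract.

```python
import math
import torch

def cyclical_threshold(step, period=1000, k_min=1.5, k_max=3.0):
    """Cosine-cycled multiplier for the loss cutoff (hypothetical schedule)."""
    phase = (step % period) / period
    return k_min + 0.5 * (k_max - k_min) * (1.0 + math.cos(2.0 * math.pi * phase))

def filtered_batch_loss(per_sample_loss, step, accept_prob=0.1):
    """Average the batch loss over samples kept by a robust, cyclic cutoff.

    Samples whose loss exceeds median + k(step) * MAD are dropped
    deterministically, except for a small random fraction that is kept
    anyway to preserve data diversity (probabilistic selection).
    """
    median = per_sample_loss.median()
    mad = (per_sample_loss - median).abs().median() + 1e-8
    cutoff = median + cyclical_threshold(step) * mad

    keep = per_sample_loss <= cutoff                      # deterministic mask
    lucky = torch.rand_like(per_sample_loss) < accept_prob  # probabilistic re-admission
    keep = keep | lucky

    masked = per_sample_loss * keep
    return masked.sum() / keep.sum().clamp(min=1)
```

In a pretraining loop, `per_sample_loss` would typically come from token-level cross-entropy computed with `reduction='none'` and averaged per sequence; the value returned here would then replace the ordinary mean batch loss before calling `backward()`.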