Training with growing sets: A comparative study


Yegin M. N., Kurttekin O., Bahsi S. K., AMASYALI M. F.

EXPERT SYSTEMS, vol. 39, no. 7, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 39 Issue: 7
  • Publication Date: 2022
  • DOI: 10.1111/exsy.12961
  • Journal Name: EXPERT SYSTEMS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, ABI/INFORM, Applied Science & Technology Source, Biotechnology Research Abstracts, Business Source Elite, Business Source Premier, Compendex, Computer & Applied Sciences, INSPEC, Psycinfo, Library, Information Science & Technology Abstracts (LISTA)
  • Keywords: curriculum learning, deep learning, neural network, optimization, self-paced learning
  • Affiliated with Yıldız Technical University: Yes

Abstract

Inspired by the process of human education, curriculum learning (CL) methods sort the input examples from easy to difficult and then add them to the training set in that order. Since CL research is mostly concerned with the direction of this sorting (easy-to-difficult vs. difficult-to-easy) and with its criteria, numerous studies addressing both directions have emerged in the literature. This raises an apparent contradiction, which demands finding an aspect common to ordering in both directions. This study argues that the required common aspect lies in the gradual enlargement of the training set. In other words, it is claimed that the success of CL methods depends neither on the sorting criteria nor on the direction of the ordering. Extensive experiments were conducted on various datasets using different deep learning models to test this claim. Random ordering was observed to achieve results competitive with CL methods. Moreover, random ordering proved faster than other CL methods, as it eliminates the cost of computing the sort. Based on these results, using randomly ordered growing sets as a baseline in future CL studies is recommended. Finally, how training with growing sets can improve optimization performance is also explained from a theoretical perspective.
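The recommended baseline, training with randomly ordered growing sets, could be sketched roughly as follows. This is an illustrative outline, not the authors' implementation; the stage count and the `train_fn` callback (standing in for a real training step over the current subset) are assumptions.

```python
import random

def growing_set_training(examples, n_stages, train_fn, seed=0):
    """Random-ordered growing-set training (illustrative sketch).

    Shuffles the examples once (no easy/difficult scoring, so no sorting
    cost), then trains in stages on an incrementally enlarged pool.
    `train_fn(pool)` is a hypothetical callback that runs one training
    pass over the current subset.
    """
    examples = list(examples)
    random.Random(seed).shuffle(examples)  # random order replaces CL sorting
    stage_size = max(1, len(examples) // n_stages)
    pool = []
    for stage in range(n_stages):
        # enlarge the training set by one slice, then train on all of it
        pool.extend(examples[stage * stage_size:(stage + 1) * stage_size])
        train_fn(pool)
    leftover = examples[n_stages * stage_size:]  # remainder of the division
    if leftover:
        pool.extend(leftover)
        train_fn(pool)
    return pool
```

A curriculum variant would differ only in replacing the shuffle with a difficulty-based sort; the claim tested in the paper is that the gradual enlargement, not the ordering, is what matters.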