Training with growing sets: A comparative study


Yegin M. N. , Kurttekin O., Bahsi S. K. , AMASYALI M. F.

EXPERT SYSTEMS, 2022 (Journal Indexed in SCI)

  • Publication Type: Article
  • Publication Date: 2022
  • DOI Number: 10.1111/exsy.12961
  • Title of Journal: EXPERT SYSTEMS
  • Keywords: curriculum learning, deep learning, neural network, optimization, self-paced learning

Abstract

Inspired by the process of human education, curriculum learning (CL) methods sort the input examples from easy to difficult and then add them to the training set in that order. Since CL research is chiefly concerned with determining the direction of this sorting (from easy to difficult vs. from difficult to easy) and the criteria behind it, a vast and varied literature has emerged addressing both directions. The fact that both directions can succeed, however, presents a contradiction that calls for identifying a common aspect shared by the two. This study argues that this common aspect lies in the gradual enlargement of the training set. In other words, it is claimed that the success of CL methods does not depend on which criteria are used or in which direction the ordering is made. Extensive experiments were conducted on various datasets using different deep learning models to test this claim. Random ordering was observed to achieve results competitive with CL methods. Moreover, random ordering proved faster than the other CL methods, as it eliminates the cost of computing the sorting. Based on these results, using randomly ordered growing sets as a baseline in future CL studies is recommended. Finally, a theoretical perspective on how training with growing sets may improve optimization performance is also provided.
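
The baseline the abstract recommends can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' code): a logistic-regression classifier trained on a randomly ordered growing set, where at each stage only the first fraction of the shuffled data is visible to the optimizer. All function and parameter names are invented for this sketch.

```python
import numpy as np

def train_with_growing_sets(X, y, n_stages=5, epochs_per_stage=20, lr=0.1, seed=0):
    """Train a logistic-regression model with randomly ordered growing sets:
    the data is shuffled once (no difficulty scoring), and at stage s the
    optimizer sees only the first s/n_stages fraction of the shuffled data."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))            # random ordering: no easy/hard sorting cost
    w = np.zeros(X.shape[1])
    b = 0.0
    for stage in range(1, n_stages + 1):
        k = int(len(X) * stage / n_stages)     # grow the visible training subset
        Xs, ys = X[order[:k]], y[order[:k]]
        for _ in range(epochs_per_stage):
            p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))   # sigmoid predictions
            w -= lr * (Xs.T @ (p - ys)) / k           # full-batch gradient step
            b -= lr * float(np.mean(p - ys))
    return w, b

# Toy usage: linearly separable 2-D data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = train_with_growing_sets(X, y)
accuracy = float(np.mean(((X @ w + b) > 0).astype(float) == y))
```

Replacing the random permutation with a difficulty-based ordering recovers a standard CL schedule; the point of the paper is that, empirically, the permutation choice matters less than the gradual growth of the set itself.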