Enhancing multiple-choice question answering through sequential fine-tuning and Curriculum Learning strategies

Yigit G., Amasyali M. F.

Knowledge and Information Systems, vol.65, no.11, pp.5025-5042, 2023 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 65 Issue: 11
  • Publication Date: 2023
  • DOI Number: 10.1007/s10115-023-01918-2
  • Journal Name: Knowledge and Information Systems
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, PASCAL, ABI/INFORM, Applied Science & Technology Source, Computer & Applied Sciences, INSPEC
  • Page Numbers: pp.5025-5042
  • Keywords: Curriculum-learning, Fine-tuning, MCQA, RoBERTa, T5
  • Yıldız Technical University Affiliated: Yes


Transformer-based pre-trained language models have enabled multiple-choice question answering (MCQA) systems to reach a notable level of performance. This study focuses on exploiting the contextualized language representations acquired by such models and on transferring and sharing information among MCQA datasets. A method called multi-stage fine-tuning, based on the Curriculum Learning strategy, is presented: rather than ordering individual training samples, it sequences the source datasets themselves in a meaningful order instead of a randomized one. An extensive series of experiments over various MCQA datasets shows that the proposed method achieves remarkable performance gains over classical fine-tuning with the selected baselines, T5 and RoBERTa. The experiments are also conducted on merged source datasets, where the proposed method again improves performance. The study shows that increasing the number of source datasets, even when some of them are small-scale, helps build well-generalized models, and that higher similarity between the source datasets and the target dataset also plays a vital role in performance.
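
The core procedure lends itself to a short illustration. Below is a minimal sketch of the multi-stage fine-tuning loop, assuming hypothetical similarity and train_one_stage helpers and an ascending-similarity curriculum (least target-like source first, most target-like last); the paper's actual datasets, ordering criterion, and training details are not reproduced here.

```python
def multi_stage_fine_tune(model, sources, target, similarity, train_one_stage):
    """Fine-tune `model` on source MCQA datasets in curriculum order,
    then on the target dataset.

    `similarity(source, target)` and `train_one_stage(model, dataset)` are
    hypothetical helpers: the first scores how close a source dataset is
    to the target, the second runs one ordinary fine-tuning pass over one
    dataset and returns the updated model.
    """
    # Curriculum ordering (an assumption for illustration): least
    # target-like sources first, so each stage moves the model closer
    # to the target task before the final fine-tuning step.
    curriculum = sorted(sources, key=lambda s: similarity(s, target))

    for dataset in curriculum + [target]:
        # Each stage starts from the previous stage's weights, so
        # information learned on earlier source datasets is carried
        # forward into later stages and, finally, into the target task.
        model = train_one_stage(model, dataset)
    return model
```

In this sketch, any fine-tuning backend (e.g., a standard Transformers training loop for T5 or RoBERTa) can serve as train_one_stage; the essential design choice is that the model's weights flow sequentially through the ordered datasets rather than being re-initialized per dataset.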