Adversarial distractor generation for MCQA: Leveraging in-context learning and rule-based approaches


Yigit G., AMASYALI M. F.

Natural Language Processing Journal, vol. 13, 2025 (Scopus)

  • Publication Type: Article / Full Article
  • Volume: 13
  • Publication Date: 2025
  • DOI: 10.1016/j.nlp.2025.100186
  • Journal Name: Natural Language Processing Journal
  • Journal Indexes: Scopus
  • Keywords: Adversarial Distractor Generation, attacks, In-context learning, Llama-7b, Multiple Choice Question Answering
  • Yıldız Technical University Affiliated: Yes

Abstract

Adversarial distractor generation strengthens the capabilities of Multiple Choice Question Answering (MCQA) systems. This study introduces two distinct methods designed explicitly for adversarial distractor generation on MCQA datasets: an in-context learning approach and a rule-based approach. In the in-context learning approach, we leverage the Llama-7b model with a prompt-based strategy for adversarial distractor generation, aiming to enrich the variety and complexity of the distractors. In addition, the rule-based approach focuses mainly on the impact of appending garbage values to the answer options. Through comprehensive experiments, we systematically evaluate the two approaches on T5-Base and RoBERTa-Large across three MCQA datasets. The empirical results demonstrate a consistent decrease in performance across all evaluated datasets when either the in-context learning or the rule-based approach is applied.
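As a rough illustration of the two strategies described above, the sketch below is not taken from the paper: the prompt template, the garbage-token list, and the helper names append_garbage and build_icl_prompt are assumptions made for illustration only. It shows one way a rule-based option perturbation and an in-context-learning prompt for distractor generation might be assembled.

```python
import random

def append_garbage(options, answer_index, garbage_tokens=("xq", "##", "zzz")):
    """Rule-based perturbation sketch: append a garbage value to each non-answer
    option (a hypothetical variant; the paper may perturb a different subset)."""
    perturbed = []
    for i, opt in enumerate(options):
        if i == answer_index:
            perturbed.append(opt)
        else:
            perturbed.append(f"{opt} {random.choice(garbage_tokens)}")
    return perturbed

def build_icl_prompt(question, answer, examples):
    """In-context-learning prompt sketch for adversarial distractor generation
    with an LLM such as Llama-7b (the exact template is an assumption)."""
    demos = "\n\n".join(
        f"Question: {q}\nAnswer: {a}\nDistractors: {', '.join(d)}"
        for q, a, d in examples
    )
    return (
        "Generate plausible but incorrect answer options (distractors) "
        "for the question below.\n\n"
        f"{demos}\n\nQuestion: {question}\nAnswer: {answer}\nDistractors:"
    )

if __name__ == "__main__":
    opts = ["Paris", "London", "Berlin", "Madrid"]
    print(append_garbage(opts, answer_index=0))
    print(build_icl_prompt(
        "What is the capital of France?", "Paris",
        [("What is 2 + 2?", "4", ["3", "5", "22"])],
    ))
```

The prompt string produced by build_icl_prompt would then be passed to a Llama-7b generation call, and the perturbed options from append_garbage would replace the original options when re-evaluating an MCQA model such as T5-Base or RoBERTa-Large.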