Natural Language Processing Journal, vol. 13, 2025 (Scopus)
Adversarial distractor generation strengthens the robustness of Multiple Choice Question Answering (MCQA) systems. This study introduces two distinct methods designed explicitly for adversarial distractor generation on MCQA datasets: an in-context learning approach and a rule-based approach. In the in-context learning approach, we leverage the LLaMA-7B model with a prompt-based strategy to generate adversarial distractors, enriching their variety and complexity. In addition, the rule-based approach focuses mainly on the impact of appending garbage values to the answer options. Through comprehensive experiments, we systematically evaluate both approaches on T5-Base and RoBERTa-Large across three MCQA datasets. The empirical results demonstrate a consistent decrease in performance across all evaluated datasets under both the in-context learning and rule-based approaches.
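The rule-based perturbation described above could be sketched as follows. This is a minimal, hypothetical illustration, assuming "garbage values" means short random character strings appended to each answer option; the function name and the exact form of the garbage are assumptions, not the paper's implementation.

```python
import random
import string

def append_garbage(options, seed=0, garbage_len=5):
    """Append a random garbage token to each answer option.

    Hypothetical sketch of a rule-based adversarial perturbation:
    the precise form of the 'garbage values' is an assumption here.
    """
    rng = random.Random(seed)
    perturbed = []
    for opt in options:
        garbage = "".join(rng.choices(string.ascii_lowercase, k=garbage_len))
        perturbed.append(f"{opt} {garbage}")
    return perturbed

# Example: perturb the options of a single MCQA item.
options = ["Paris", "London", "Berlin", "Madrid"]
perturbed = append_garbage(options)
```

Because the correct answer text is altered along with the distractors, a model relying on surface-form matching rather than reasoning tends to degrade under this perturbation.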