Language Testing Research Conference 2020, Nabeul, Tunisia, 9–13 June 2021 (Unpublished)
Variability in raters’ decision-making processes has been widely recognized in the field of writing assessment (Cumming, Kantor, & Powers, 2002; Erdosy, 2004), posing challenges to test validity, reliability, and fairness. Previous studies have explored variability between groups of raters with respect to professional experience factors (Attali, 2016; Cumming, Kantor, & Powers, 2002). We explored the following research questions:
1. What are raters’ thought processes during essay rating?
2. What are raters’ perceived influences of teaching experiences on their decision-making regarding score assignments?
We recruited instructors at English-medium universities in two ME contexts: five native-English-speaking instructors in the Kurdish region of Iraq (KRI-NES) and five native-Turkish-speaking instructors in Turkey (Turkish-NNES). The interviews consisted of a think-aloud protocol (TAP) conducted while participants rated sample essays, followed by a semi-structured interview in which participants reflected on their writing assessment practices, contexts, and experiences. We adapted Cumming, Kantor, and Powers’s (2002) framework to code the TAP data; the semi-structured interview data were open-coded for emergent themes. KRI-NES teachers tended to rely on personal teaching experience when making rating decisions, whereas Turkish-NNES teachers relied more on institutional practices. KRI-NES instructors, who described their administration as hands-off, gave more personal evaluations of students’ writing; specifically, they often compared the sample essays to their own expectations for academic writing or to their own students. By contrast, the Turkish-NNES instructors, who reported strong institutional influences, drew on a more external locus of evaluation criteria, specifically adopting institutional practices. Implications for rater training and language assessment literacy will be discussed.