Second Language Research Forum 2019, Michigan, United States, 20 - 22 September 2019 (Unpublished)
Despite the increasing recognition of the plurilithic nature of English as the global academic lingua franca, language assessment still relies heavily on the idealized language competence of the monolingual native speaker (Jenkins & Leung, 2017; Klein, 1998; McNamara, 2014; Ortega, 2016). Meanwhile, recent English for Academic Purposes (EAP) research has shown that published expert writing is not remarkably different from proficient L2 writing (Lu, Kisselev, Yoon & Amory, 2018; O’Donnell, Römer & Ellis, 2013; Römer, 2009). However, such research used expert corpora comprising a mix of native and non-native published writing, which is problematic because the papers are revised and edited according to native speaker norms (Seidlhofer, 2004) and because most journal guidelines today still favor these norms (McKinley & Rose, 2018). In addition, successful non-native writers have recently been proposed as models for L2 writing, with benchmarks derived from their texts shown to be strong predictors of writing quality (Monteiro, Crossley & Kyle, 2018; Naismith, Han, Juffs, Hill & Zheng, 2018). Therefore, in order to further explore how L2 benchmarks can be used for determining L2 writing quality, this study analyzes the phraseological features of advanced learner writing (C1 and C2 CEFR proficiency levels) from the EF-Cambridge Open Language Database (Geertzen, Alexopoulou & Korhonen, 2013) in comparison with unedited English research articles by non-native scholars from the Corpus of Written English as a Lingua Franca in Academic Settings (WrELFA, 2015), across six first language backgrounds (Chinese, French, Russian, Spanish, Portuguese, and Italian). Both continuous (n-grams) and discontinuous (phrase-frames) word sequences were extracted with the software tools AntGram 1.0 (Anthony, 2018), AntConc 3.5.8 (Anthony, 2019), and WordSmith 7 (Scott, 2016). The findings, based on quantitative and qualitative phraseological analyses of the similarities and differences across the datasets, allow a closer look at advanced L2 student writing measured against L2 scholarly writing as a benchmark. Finally, given the high degree of communicative effectiveness found in both corpora, we argue that advanced L2 writing should inform the definition of language assessment benchmarks so as to move beyond native speaker norms, an issue that language assessment research has largely overlooked.
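To make the distinction between continuous and discontinuous sequences concrete, the sketch below illustrates in Python what n-gram and phrase-frame extraction amounts to. It is only a minimal illustration: the study itself used AntGram, AntConc, and WordSmith rather than custom code, and the whitespace tokenization, the four-word span, and the single-wildcard frame definition here are assumptions made for the example, not the study's actual settings.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-word sequences (n-grams) in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def phrase_frames(tokens, n):
    """Derive phrase-frames (p-frames): n-grams with one internal word
    replaced by a wildcard '*', i.e. discontinuous word sequences."""
    frames = Counter()
    for gram in ngrams(tokens, n):
        for slot in range(1, n - 1):  # first and last words stay fixed
            frame = gram[:slot] + ("*",) + gram[slot + 1:]
            frames[" ".join(frame)] += 1
    return frames

# Toy illustration on one invented sentence; the study worked on full corpora.
tokens = ("the results of the study indicate that "
          "the results of the analysis differ").split()

ngram_counts = Counter(" ".join(g) for g in ngrams(tokens, 4))
frame_counts = phrase_frames(tokens, 4)

print(ngram_counts.most_common(2))  # e.g. [('the results of the', 2), ...]
print(frame_counts.most_common(2))  # e.g. [('the * of the', 2), ('the results * the', 2)]
```

In this toy run, the recurring 4-gram "the results of the" is a continuous sequence, while frames such as "the * of the" capture the discontinuous patterning that phrase-frame analysis targets; frequency lists of both types are what the corpus comparisons in the study are based on.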