Technology for Second Language Learning Conference, Iowa, United States, 5-8 November 2025, p. 89 (Abstract)
The genre analysis capacity of ChatGPT has been explored for research articles, corporate social responsibility reports, and narrative short stories (e.g., Kim and Lu, 2024; Yu, 2025). However, little is known about its effectiveness in analyzing genres that are more widely used in higher education, such as critical reflections. Manual analysis of this genre poses unique challenges for researchers due to its highly context-dependent, personalized, and subjective nature and the lack of a standardized move-step model. This study explores the potential of a custom GPT model, ReflectGPT, for rhetorically analyzing 140 critical reflections written by 70 undergraduate students for two reflective tasks in a counterbalanced design: source-based and non-source-based. We investigate (a) ReflectGPT's accuracy in identifying the rhetorical moves in student-written critical reflections compared to human annotations and (b) the effect of task type on ReflectGPT's accuracy. We manually annotated the reflections using four moves from Ryan and Ryan's (2013) reflective scale. We then trained ReflectGPT, a custom version of OpenAI's GPT-4, using 80% of our dataset for training, 10% for validation, and 10% for testing, a common practice for small, domain-specific datasets (e.g., Yu, 2025). To evaluate ReflectGPT's performance, we calculate precision, recall, and F1 scores for each move across the two tasks. Data analysis is in progress. We expect the findings will contribute to our understanding of AI use for rhetorical analysis, particularly for complex genres such as critical reflections.
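As a rough illustration of the evaluation setup the abstract describes, the Python sketch below splits a labeled dataset 80/10/10 and scores model-assigned moves against human annotations with per-move precision, recall, and F1. This is a minimal sketch under stated assumptions, not the study's actual pipeline: the move label names (loosely echoing Ryan and Ryan's 4Rs), the split helper, and the use of scikit-learn's precision_recall_fscore_support are all illustrative choices, not details confirmed by the abstract.

```python
# Hypothetical sketch of the per-move evaluation described in the abstract.
# The move labels below are placeholders, not the study's coding scheme.
import random
from sklearn.metrics import precision_recall_fscore_support

MOVES = ["reporting", "relating", "reasoning", "reconstructing"]  # assumed labels


def split_dataset(items, seed=42):
    """Shuffle and split items 80/10/10 into train/validation/test sets."""
    items = items[:]
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]


def evaluate(human_labels, model_labels):
    """Compute precision, recall, and F1 per move, treating the human
    annotations as the gold standard against which the model is scored."""
    p, r, f1, support = precision_recall_fscore_support(
        human_labels, model_labels, labels=MOVES, zero_division=0
    )
    return {
        move: {"precision": p[i], "recall": r[i], "f1": f1[i], "n": int(support[i])}
        for i, move in enumerate(MOVES)
    }
```

Scoring per move (rather than averaging across moves) matters here because accuracy on frequent moves can mask weak performance on rarer ones, which is presumably why the abstract reports the metrics for all moves across the two task types.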