Optimizing Large Language Models for Turkish: New Methodologies in Corpus Selection and Training
KESGİN H. T., Uzun M. E., Erdem Y., AMASYALI M. F., YÜCE M. K., Uz A., et al.

2024 Innovations in Intelligent Systems and Applications Conference, ASYU 2024, Ankara, Türkiye, 16-18 October 2024 (Full Text Paper)

  • Publication Type: Conference Paper / Full Text Paper
  • DOI: 10.1109/asyu62119.2024.10757019
  • City of Publication: Ankara
  • Country of Publication: Türkiye
  • Keywords: Cross-Lingual Transfer Learning, Few-Shot Learning, Large Language Model Optimization, Multilingual Models, Natural Language Processing, Synthetic Datasets, Turkish Language Models, Zero-Shot Learning
  • Affiliated with Yıldız Technical University: Yes

Abstract

In this study, we develop and assess new corpus selection and training methodologies to improve the effectiveness of Turkish language models. Specifically, we adapted Large Language Model-generated datasets and translated English datasets into Turkish, integrating these resources into the training process. This approach led to substantial gains in model accuracy in both few-shot and zero-shot learning scenarios. Furthermore, merging these adapted models was found to markedly improve their performance. Human evaluation metrics, including task-specific performance assessments, further demonstrated that the adapted models possess a greater aptitude for comprehending the Turkish language and answering logic-based queries. This research underscores the importance of refining corpus selection strategies to optimize the performance of multilingual models, particularly for under-resourced languages like Turkish.
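The abstract mentions that merging the adapted models improved performance. The paper's exact merging procedure is not given here; one common approach is weight-space interpolation of models that share an architecture. The sketch below is a hypothetical illustration of that idea (function name, dict structure, and values are assumptions, not from the paper), using plain floats in place of parameter tensors so the example is self-contained:

```python
# Hypothetical sketch: merging two fine-tuned models of the same
# architecture by linearly interpolating their parameters.
# All names and values are illustrative, not taken from the paper.

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Return alpha * sd_a + (1 - alpha) * sd_b, key by key."""
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Toy "state dicts" with floats standing in for weight tensors
model_a = {"layer.weight": 1.0, "layer.bias": 0.0}
model_b = {"layer.weight": 3.0, "layer.bias": 2.0}

merged = merge_state_dicts(model_a, model_b, alpha=0.5)
# merged == {"layer.weight": 2.0, "layer.bias": 1.0}
```

In practice the same loop would run over framework tensors (e.g. a PyTorch `state_dict`), and the interpolation coefficient `alpha` would be tuned on a validation set.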