Introducing cosmosGPT: Monolingual Training for Turkish Language Models


KESGİN H. T., YÜCE M. K., Dogan E., Uzun M. E., Uz A., Seyrek H. E., et al.

18th International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2024, Craiova, Romania, 4-6 September 2024

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/inista62901.2024.10683863
  • City of Publication: Craiova
  • Country of Publication: Romania
  • Keywords: evaluation datasets, instruction-finetuning, large language models, LLM performance comparison, natural language processing, Turkish language models, Turkish NLP
  • Affiliated with Yıldız Teknik Üniversitesi: Yes

Abstract

The number of open-source language models that can produce Turkish is increasing day by day, as in other languages. The base versions of such models are usually created by continuing the pretraining of multilingual models on Turkish corpora. The alternative is to pretrain a model on Turkish corpora alone. In this study, we first introduce the cosmosGPT models, which we created with this alternative method. We then introduce new fine-tuning datasets that enable base language models to fulfill user requests, and new evaluation datasets for measuring the capabilities of Turkish language models. Finally, we present a comprehensive comparison of the adapted Turkish language models across different capabilities. The results show that the language models we built on a monolingual corpus achieve promising performance despite being about 10 times smaller than the others.
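For readers who want to try models of this kind, below is a minimal sketch of loading a monolingual Turkish GPT checkpoint with the Hugging Face transformers library and generating text. The model id ytu-ce-cosmos/turkish-gpt2-large is an assumption for illustration and may not match the exact cosmosGPT releases; the generation settings are likewise illustrative, not the configuration used in the paper.

```python
# Minimal sketch: load a monolingual Turkish causal LM and generate text.
# The model id below is an assumption; substitute the actual released checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ytu-ce-cosmos/turkish-gpt2-large"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Türkçe doğal dil işleme"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=40,  # illustrative sampling settings, not the paper's setup
    do_sample=True,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same interface applies to the instruction-tuned variants described in the paper, with the prompt formatted as a user request rather than a raw continuation prefix.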