Turkish Sign Language Recognition Using a Fine-Tuned Pretrained Model


Ozgul G., DERDİYOK Ş., PATLAR AKBULUT F.

2nd International Conference on Advanced Engineering, Technology and Applications, ICAETA 2023, İstanbul, Türkiye, 10-11 March 2023, vol. 1983 CCIS, pp. 73-82

  • Publication Type: Conference Paper / Full Text
  • Volume: 1983 CCIS
  • DOI: 10.1007/978-3-031-50920-9_6
  • City of Publication: İstanbul
  • Country of Publication: Türkiye
  • Pages: pp. 73-82
  • Keywords: Convolutional Neural Networks (CNN), Sign language, Transfer learning
  • Affiliated with Yıldız Technical University: Yes

Abstract

Many members of society rely on sign language as an alternative means of communication. Hand shape, motion profile, and the relative positioning of the hand, face, and other body parts all contribute to the uniqueness of each sign across sign languages, making visual sign language recognition a particularly challenging problem in computer vision. In recent years, researchers have proposed many models, with deep learning approaches yielding substantial improvements. In this study, we employ a fine-tuned CNN for sign language recognition from visual input, trained on a dataset of 2062 images. Machine learning based recognition systems often struggle to reach the desired accuracy because annotated sign language datasets are scarce; the goal of this study is therefore to improve model performance through transfer learning. The dataset used in this research contains images of the 10 digits from 0 to 9, and in testing the signs were recognized with 98% accuracy using the pre-trained VGG16 model.
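The abstract does not give implementation details, so the following is a minimal sketch of the described approach (fine-tuning a pre-trained VGG16 for 10-class sign digit classification), assuming a TensorFlow/Keras setup, ImageNet weights, a 64×64 RGB input size, and a small dense classification head; the input resolution, head architecture, optimizer, and learning rates are assumptions, not details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

IMG_SIZE = (64, 64)   # assumed input resolution; not specified in the paper
NUM_CLASSES = 10      # digits 0-9, as in the dataset described above

# Load the VGG16 convolutional base pre-trained on ImageNet and freeze it,
# so that only the new classification head is trained at first (transfer learning).
base = VGG16(weights="imagenet", include_top=False, input_shape=(*IMG_SIZE, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # head size is an assumption
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Train the head on the sign digit images, e.g.:
# model.fit(train_images, train_labels, validation_split=0.2,
#           epochs=20, batch_size=32)

# Optional fine-tuning step: unfreeze the last convolutional block and
# continue training with a lower learning rate.
base.trainable = True
for layer in base.layers[:-4]:
    layer.trainable = False
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the convolutional base first and only later unfreezing its top block is one common way to transfer ImageNet features to a small dataset such as the 2062-image digit set; the exact layers unfrozen and number of epochs would need to be tuned to reproduce the reported 98% accuracy.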