Signal, Image and Video Processing, vol. 19, no. 15, 2025 (SCI-Expanded)
Sign language recognition (SLR) plays a pivotal role in fostering communication inclusivity between hearing and non-hearing communities. This study addresses Turkish Sign Language recognition (TSLR), employing advanced machine learning techniques to bridge communication gaps. Using the BosphorusSign22k dataset, we present a comprehensive analysis of two models: Long Short-Term Memory (LSTM) and Dynamic Time Warping combined with a Decision Tree classifier (DTW+DTC). Our approach involves careful preprocessing, including data augmentation and feature selection. The results show that the DTW+DTC model achieves an accuracy of 78%, outperforming the LSTM model, which reaches 68%. Moreover, our ablation study underscores the critical role of the preprocessing steps and of data quality in model performance. While promising, this study acknowledges limitations in dataset diversity and size, as well as challenges associated with overfitting. The implications extend to robotics and highlight the potential for advancing sign language recognition systems. This work demonstrates the power of technology in fostering communication inclusivity, opening avenues for further research and innovation in SLR.
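To make the DTW+DTC idea concrete, the following is a minimal sketch of one plausible pipeline: each sign clip is a sequence of frame-wise keypoint vectors, represented by its DTW distances to one reference template per class, and a decision tree is trained on those distance features. The template strategy, function names, and synthetic data below are illustrative assumptions, not the authors' implementation or the BosphorusSign22k preprocessing.

```python
# Sketch of a DTW-distance featurization feeding a Decision Tree classifier.
# Assumes sequences are (T, D) arrays of keypoint coordinates per frame.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two sequences (Ta, D) and (Tb, D)."""
    Ta, Tb = len(a), len(b)
    cost = np.full((Ta + 1, Tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[Ta, Tb]

def dtw_features(sequences, templates):
    """Represent each sequence by its DTW distance to one template per class
    (a hypothetical featurization choice made for this sketch)."""
    return np.array([[dtw_distance(seq, t) for t in templates]
                     for seq in sequences])

# Toy usage with synthetic keypoint sequences (3 classes, 5 samples each).
rng = np.random.default_rng(0)
templates = [rng.normal(size=(20, 10)) for _ in range(3)]
train_seqs = [t + rng.normal(scale=0.1, size=t.shape)
              for t in templates for _ in range(5)]
train_labels = np.repeat(np.arange(3), 5)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(dtw_features(train_seqs, templates), train_labels)

test_seq = templates[1] + rng.normal(scale=0.1, size=templates[1].shape)
print(clf.predict(dtw_features([test_seq], templates)))  # expected: class 1
```

In this kind of setup, the tree effectively learns thresholds on per-class DTW distances, which may explain why careful preprocessing and clean keypoint data matter so much for the reported accuracy.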