Sign language is the visual language through which hearing- or speech-impaired individuals communicate using facial expressions and hand movements. Very few people can read or write sign language, so hearing- or speech-impaired individuals face great difficulty communicating with others, especially when using services such as hospitals and schools. In this study, real-time sign language detection and on-screen display were performed with deep learning. Hand and finger gestures made by hearing- or speech-impaired individuals are detected in front of a camera; the letter corresponding to each gesture is then recognized and displayed on the computer screen. The method uses the YOLOv8 architecture. First, a dataset was created for the study, covering 29 letters and 10 digits, with photographs of sign language gestures collected from 100 different people. Various transformations were then applied to the photographs to minimize errors that could arise from camera distortion; with these augmentations, the dataset grew to 11,079 photographs. As a result of the study, the model achieved 90.7% precision, 85.8% mAP, and 81.4% recall.
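To make the described pipeline concrete, the sketch below shows how a YOLOv8 detector could be trained on such a dataset and then run against a live camera feed using the `ultralytics` and OpenCV packages. The file names (`signs.yaml`, `sign_language.pt`), the class count, and all hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of the pipeline described in the abstract, using the
# ultralytics YOLOv8 package and OpenCV. File names, labels, and
# hyperparameters are assumptions, not the authors' actual setup.
import cv2
from ultralytics import YOLO


def train():
    """One-off training run. "signs.yaml" is assumed to describe the
    39-class dataset (29 letters + 10 digits) in standard YOLO format."""
    model = YOLO("yolov8n.pt")  # start from a pretrained checkpoint
    model.train(data="signs.yaml", epochs=100, imgsz=640)


def run_webcam(weights="sign_language.pt"):
    """Detect gestures frame by frame and overlay the recognized
    letter or digit on the camera image."""
    model = YOLO(weights)
    cap = cv2.VideoCapture(0)  # default camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)[0]
        # Draw each detected box with its class label (the sign's letter/digit).
        for box in results.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            label = model.names[int(box.cls[0])]
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, label, (x1, y1 - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("Sign language detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run_webcam()
```

In practice, the augmentation step described in the abstract (transformations that simulate camera distortion) would be applied when building the dataset referenced by `signs.yaml`, before training.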
| Primary Language | English |
|---|---|
| Subjects | Deep Learning |
| Journal Section | Research Article |
| Authors | |
| Submission Date | June 6, 2024 |
| Acceptance Date | March 4, 2025 |
| Early Pub Date | May 30, 2025 |
| Publication Date | May 31, 2025 |
| Published in Issue | Year 2025 Volume: 13 Issue: 2 |
Academic Platform Journal of Engineering and Smart Systems