Research Article


Real-Time Word Detection in Turkish Sign Language with Deep Learning

Year 2026, Volume: 22, Issue: 1, pp. 132-141, 30.03.2026
https://doi.org/10.18466/cbayarfbe.1635817
https://izlik.org/JA22BC49KN

Abstract

Communication occurs when people can mutually understand each other, and hearing-impaired people often have great difficulty communicating with those around them. Although many hearing-impaired individuals can understand others through lip reading, they frequently struggle to express themselves. Sign language is not widely used around the world, and very few people other than the hearing-impaired know it. This study aims to detect, using deep learning, the 50 words most commonly needed by hearing-impaired individuals in hospitals, and especially in emergency services. Detection is word-based rather than letter-based, and it targets movements rather than single still images. For the study, a dataset was created from videos of the 50 words, signed by 100 volunteers and recorded from different angles. The images in the dataset were augmented with grayscale conversion, tilt, blurring, variability enhancement, noise addition, brightness change, colour-vividness change, perspective change, resizing, and position change. These augmentations minimize errors caused by camera distortions and thus increase the accuracy of real-time detection on live camera images. The dataset was trained with the YOLOv8 algorithm. The model achieved an average precision of 95.0% and a mean average precision (mAP) of 95.1%, and an accuracy of 89.40% was achieved in real-world testing.
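A few of the augmentations listed in the abstract can be sketched as below. This is an illustrative NumPy-only sketch, not the authors' actual pipeline; the function names and parameter values are assumptions chosen for the example.

```python
# Illustrative sketch (assumption: not the authors' exact pipeline) of four of the
# augmentations named in the abstract: grayscale conversion, brightness change,
# noise addition, and position change.
import numpy as np

def to_grayscale(img):
    """Grayscale conversion: luminance-weighted average of the RGB channels."""
    w = np.array([0.299, 0.587, 0.114])
    gray = (img[..., :3] * w).sum(axis=-1)
    return np.repeat(gray[..., None], 3, axis=-1).astype(img.dtype)

def adjust_brightness(img, delta):
    """Brightness change: add a constant offset, clipped to [0, 255]."""
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def add_noise(img, sigma, rng):
    """Noise addition: Gaussian noise with standard deviation sigma."""
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def shift(img, dx, dy):
    """Position change: translate by (dx, dy), filling vacated pixels with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

# Apply each augmentation to one synthetic frame standing in for a video frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = [to_grayscale(frame), adjust_brightness(frame, 40),
             add_noise(frame, 10.0, rng), shift(frame, 5, -3)]
```

Each augmented copy keeps the original frame's shape and dtype, so the enlarged dataset can be fed to the same detector unchanged; training on the resulting images would then use the YOLOv8 toolchain itself, which is outside this sketch.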

References

  • [1]. Takahashi, T, Kishino, F. 1992. A hand gesture recognition method and its application. Systems and Computers in Japan, 23 (3), pp. 38-48. (https://doi.org/10.1002/scj.4690230304.)
  • [2]. Grobel, K, Hienz, H. 1996. Fuzzy video-based handshape recognition. Proceedings of the ACM Symposium on Applied Computing, pp. 614-618. (https://doi.org/10.1145/331119.331469.)
  • [3]. Watanabe, K, Iwai, Y, Yagi, Y, Yachida, M. 1999. Recognition of sign language alphabet using coloured gloves. Systems and Computers in Japan, 30 (4), pp. 51-61. (https://doi.org/10.1002/(SICI)1520-684X(199904)30:4<51::AID-SCJ6>3.0.CO;2-%23.)
  • [4]. Wang, H, Leu, MC, Oz, C. 2006. American Sign Language recognition using multi-dimensional Hidden Markov Models. Journal of Information Science and Engineering, 22 (5), pp. 1109-1123. (https://doi.org/10.6688/JISE.2006.22.5.8.)
  • [5]. Shanableh, T, Assaleh, K. 2011. User-independent recognition of Arabic sign language for facilitating communication with the deaf community. Digital Signal Processing: A Review Journal, 21 (4), pp. 535-542. (https://doi.org/10.1016/j.dsp.2011.01.015.)
  • [6]. Shamrat, FM. 2021. Bangla numerical sign language recognition using convolutional neural network CNNs, Indonesian Journal of Electrical Engineering and Computer Science, 23, pp. 405–413. (https://doi.org/10.11591/ijeecs.v23.i1.pp405-413.)
  • [7]. Rastgoo, R, Kiani, K, Escalera, S. 2021. Sign Language Recognition: A Deep Survey. Expert Systems with Applications, 164, pp. 113794. (https://doi.org/10.1016/j.eswa.2020.113794.)
  • [8]. Yu, S, Jia, S, Xu, C. 2017. Convolutional neural networks for hyperspectral image classification. Neurocomputing, 219, pp. 88-98. (https://doi.org/10.1016/j.neucom.2016.09.)
  • [9]. Nam, Y, Lee, C. 2021. Cascaded convolutional neural network architecture for speech emotion recognition in noisy conditions. Sensors, 21(13), pp. 4399. (https://doi.org/10.3390/s21134399.)
  • [10]. Anantha, RG, Kishore, PVV, Sastry, ASCS, Anil, D, Kiran, KE. 2018. Selfie continuous sign language recognition with neural network classifier, Lecture Notes in Electrical Engineering, 434, pp. 31-40. (https://doi.org/10.1111/exsy.12197.)
  • [11]. Rao, GA, Syamala, K, Kishore, PVV, Sastry, ASC. 2018. Deep convolutional neural networks for sign language recognition. 2018 Conference on Signal Processing And Communication Engineering Systems, pp. 194-197, Vijayawada, India, 04-05 January 2018. (https://doi.org/10.1109/SPACES.2018.8316344.)
  • [12]. Siddique, S, Islam, S, Neon, EE, Sabbir, T, Naheen, IT, Khan, R. 2023. Deep Learning-based Bangla Sign Language Detection with an Edge Device. Intelligent Systems with Applications, 18, pp. 200224. (https://doi.org/10.1016/j.iswa.2023.200224.)
  • [13]. Jamaladdin, H, Nigar, A, Aykhan, N, Samir, D, Toghrul, T. 2023. Development of a hybrid word recognition system and dataset for the Azerbaijani Sign Language dactyl alphabet, Speech Communication. (153) 102960, (https://doi.org/10.1016/j.specom.2023.102960.)
  • [14]. Ong, EJ, Cooper, H, Pugeault, N, Bowden, R. 2012. Sign language recognition using sequential pattern trees. Conference on Computer Vision and Pattern Recognition, Washington-USA, pp. 2200–2207. (https://doi.org/10.1109/CVPR.2012.6247928.)
  • [15]. Ong, EJ, Koller, O, Pugeault, N, Bowden, R. 2014. Sign spotting using hierarchical sequential patterns with temporal intervals. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington-USA, pp. 1923–193. (http://dx.doi.org/10.1109/CVPR.2014.248.)
  • [16]. Athitsos, V, Neidle, C, Sclaroff, S, Nash, J, Stefan, A, Yuan, Q, Thangali, A, 2008. The American Sign Language lexicon video dataset, 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Alaska-USA, pp. 1–8. (http://dx.doi.org/10.1109/CVPRW.2008.4563181.)
  • [17]. Neidle, C, Thangali, A, Sclaroff, S. 2012. Challenges in development of the American Sign Language lexicon video dataset (asllvd) corpus, Proc.5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, Language Resources and Evaluation Conference (LREC) 2012, İstanbul-Turkey, pp. 1-8.
  • [18]. Kim, JH, Kim, N, Park, H, Park, JC. 2016. Enhanced sign language transcription system via hand tracking and pose estimation, Journal of Computing Science and Engineering, 10 (3), 95–101. (http://dx.doi.org/10.5626/JCSE.2016.10.3.95.)
  • [19]. Metaxas, D, Dilsizian, M, Neidle, C. 2018. Scalable ASL sign recognition using model-based machine learning and linguistically annotated corpora, 8th Workshop on the Representation & Processing of Sign Languages: Involving the Language Community, Language Resources and Evaluation Conference, Miyazaki-Japan, pp. 1-5, 12 May 2018.
  • [20]. Oszust, M, Wysocki, M. 2013. Polish sign language words recognition with Kinect, 2013 6th International Conference on Human System Interactions (HSI), Gdansk-Poland, pp. 219–226, 6-8 June 2013. (http://dx.doi.org/10.1109/HSI.2013.6577826.)
  • [21]. Oszust, M, Wysocki. M. 2014. Some Approaches to Recognition of Sign Language Dynamic Expressions with Kinect. Advances in Intelligent Systems and Computing, vol 300, Hippe Zdzisaw S., Springer Cham, pp. 75-86. (http://dx.doi.org/10.1007/978-3-319-08491-6_7.)
  • [22]. Kapuscinski, T, Oszust, M, Wysocki, M, Warchol D. 2015. Recognition of hand gestures observed by depth cameras, International Journal of Advanced Robotic Systems, 12 (4), 36, pp. 1-15. (http://dx.doi.org/10.5772/60091.)
  • [23]. Ronchetti, F, Quiroga, F, Estrebou, CA, Lanzarini, LC, Rosete, A. 2016. LSA64: an Argentinian sign language dataset, CACIC 2016, Roma-Italy, pp. 1-10, 3-7. (http://dx.doi.org/10.48550/arXiv.2310.17429.)
  • [24]. Ronchetti, F. 2017. Thesis Overview: Dynamic Gesture Recognition and its Application to Sign Language, Journal of Computer Science and Technology, 17, pp. 1–10. (http://dx.doi.org/10.24215/16666038.17.e21.)
  • [25]. Chai, X, Wang, H, Chen, X. 2014. The devising large vocabulary of Chinese sign language database and baseline evaluations, Technical report VIPL-TR-14- SLR-001. Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, 2014. (http://dx.doi.org/10.11999/JEIT221051.)
  • [26]. Rusul, HH, Rasha, MH, Inaam, SA. 2023. Yolo Versions Architecture: Review, International Journal of Advances in Scientific Research and Engineering, Vol. 9, 11, pp.1-20. (https://doi.org/10.31695/IJASRE.2023.9.11.7.)
  • [27]. Terven, J, Diana-Margarita, C, Julio-Alejandro, R. 2023. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS, Machine Learning and Knowledge Extraction, Vol. 5, 4, 1680-1716. (https://doi.org/10.3390/make5040083.)
  • [28]. Ameen, C, Vadera, S. 2017. A convolutional neural network to classify American Sign Language fingerspelling from depth and colour images. Expert Syst. 34 (3), e12197. (http://dx.doi.org/10.1111/exsy.12197.)
  • [29]. Beena, M. 2017. Automatic Sign Language Finger Spelling Using Convolution Neural Network: Analysis.
  • [30]. Masood, S, Thuwal, HC, Srivastava, A. 2018. American sign language character recognition using convolution neural network. Computing and Informatics. Springer Singapore, Singapore, pp. 403–412. (http://dx.doi.org/10.1007/978-3-319-16178-5_40.)
  • [31]. Kim, T, Keane, J, Wang, W, Tang, H, Riggle, J, Shakhnarovich, G, Brentari, D, Livescu, K. 2017. Lexicon-free fingerspelling recognition from video: Data, models, and signer adaptation. Comput. Speech Lang. 46, pp. 209–232. (http://dx.doi.org/10.1016/j.csl.2017.05.009.)
  • [32]. Taskiran, M, Killioglu, M, Kahraman, N. 2018. A real-time system for recognition of American Sign Language by using deep learning. In: 2018 41st International Conference on Telecommunications and Signal Processing. TSP, pp. 1–5. (http://dx.doi.org/10.1109/TSP.2018.8441304.)
  • [33]. Shi, B, Rio, AMD, Keane, J, Brentari, D, Shakhnarovich, G, Livescu, K. 2019. Fingerspelling recognition in the wild with iterative visual attention. In: 2019 IEEE/CVF International Conference on Computer Vision. ICCV, IEEE, (http://dx.doi.org/10.1109/iccv.2019.00550.)
  • [34]. Shi, B, Rio, AMD, Keane, J, Michaux, J, Brentari, D, Shakhnarovich, G, Livescu, K. 2018. American Sign Language fingerspelling recognition in the wild. In: 2018 IEEE Spoken Language Technology Workshop. SLT, IEEE, (http://dx.doi.org/10.1109/slt.2018.8639639.)
  • [35]. Warchol, D, Kapuscinski, T, Wysocki, M. 2019. Recognition of fingerspelling sequences in Polish Sign Language using point clouds obtained from depth images. Sensors, 19 (5), 1078. (http://dx.doi.org/10.3390/s19051078.)
  • [36]. Raghuveera, T, Deepthi, R, Mangalashri, R, Akshaya, R. 2020. A depth-based Indian Sign Language recognition using Microsoft Kinect, Sadhana, 45-34, pp. 1-13. (http://dx.doi.org/10.1007/s12046-019-1250-6.)
  • [37]. Kwolek, B, Baczynski, W, Sako, S. 2021. Recognition of JSL fingerspelling using deep convolutional neural networks. Neurocomputing, 456, pp. 586–598. (http://dx.doi.org/10.1016/j.neucom.2021.03.133.)
  • [38]. Mannan, A, Abbasi, A, Javed, AR, Ahsan, A, Gadekallu, TR, Xin, Q. 2022. Hypertuned Deep Convolution Neural Network for Sign Language Recognition. Computational Intelligence and Neuroscience, pp. 1–10. (https://doi.org/10.1155/2022/1450822.)
  • [39]. Gadekallu, TR, Srivastava, G, Liyanage, M, Iyapparaja, M, Chowdhary, CL, Koppu, S, Maddikunta, PKR. 2022. Hand gesture recognition based on a Harris Hawks optimized convolution neural network. Comput. Electr. Eng. 100, 107836. (http://dx.doi.org/10.1016/j.compeleceng.2022.107836.)
  • [40]. Aliyev, S, Almisreb, AA, Turaev, S. 2022. Azerbaijani sign language recognition using machine learning approach. J. Phys. Conf. Ser. 2251 (1), 012007. (http://dx.doi.org/10.1088/1742-6596/2251/1/012007.)
  • [41]. Angona, TM. 2020. Automated Bangla sign language translation system for alphabets by means of MobileNet, TELKOMNIKA (Telecommunication Computing Electronics and Control), 18, pp. 1292–1301. (https://doi.org/10.12928/telkomnika.v18i3.15311.)
  • [42]. Talukder, D, Jahara, F, Barua, S, Haque, MM. 2021. OkkhorNama: BdSL image dataset for real time object detection algorithms, IEEE Region 10 Symposium, pp. 1–6. (https://doi.org/10.1109/TENSYMP52854.2021.9550907.)
  • [43]. Siddique, S, Islam, S, Neon, EE, Sabbir, T, Naheen, IT, Khan, R. 2023. Deep Learning-based Bangla Sign Language Detection with an Edge Device, Intelligent Systems with Applications, 18, pp. 200224. (https://doi.org/10.1016/j.iswa.2023.200224.)
  • [44]. Özcan, T, Baştürk, A. 2021. ERUSLR: a new Turkish sign language dataset and its recognition using hyper parameter optimization aided convolutional neural network, Journal of the Faculty of Engineering and Architecture of Gazi University, 36 (1), pp. 527-542. (https://doi.org/10.17341/gazimmfd.746793.)
  • [45]. Miah, ASM, Shin, J, Hasan, MAH, Rahim, MA. 2022. BenSignNet: Bengali sign language alphabet recognition using concatenated segmentation and convolutional neural network, Applied Sciences, 12 (8), pp. 3933. (https://doi.org/10.3390/app12083933.)
  • [46]. Pacal, I, Alaftekin, M. 2023. CNN-Based approaches for automatic recognition of Turkish sign language, Journal of the Institute of Science and Technology, 13 (2), pp. 760-777. (https://doi.org/10.55525/tjst.1073116.)
  • [47]. Miah, ASM, Shin, J, Hasan, MAH, Rahim, MA. 2022. BenSignNet: Bengali sign language alphabet recognition using concatenated segmentation and convolutional neural network, Applied Sciences, 12 (8), pp. 3933. (https://doi.org/10.3390/app12083933.)

Details

Primary Language: English
Subjects: Electrical Engineering (Other)
Section: Research Article
Authors

Abdil Karakan 0000-0003-1651-7568

Yüksel Oğuz 0000-0002-5233-151X

Submission Date: 13 February 2025
Acceptance Date: 21 December 2025
Publication Date: 30 March 2026
DOI: https://doi.org/10.18466/cbayarfbe.1635817
IZ: https://izlik.org/JA22BC49KN
Published Issue: Year 2026, Volume: 22, Issue: 1

How to Cite

APA Karakan, A., & Oğuz, Y. (2026). Real-Time Word Detection in Turkish Sign Language with Deep Learning. Celal Bayar University Journal of Science, 22(1), 132-141. https://doi.org/10.18466/cbayarfbe.1635817
AMA Karakan A, Oğuz Y. Real-Time Word Detection in Turkish Sign Language with Deep Learning. Celal Bayar University Journal of Science. 2026;22(1):132-141. doi:10.18466/cbayarfbe.1635817
Chicago Karakan, Abdil, and Yüksel Oğuz. 2026. “Real-Time Word Detection in Turkish Sign Language with Deep Learning”. Celal Bayar University Journal of Science 22 (1): 132-41. https://doi.org/10.18466/cbayarfbe.1635817.
EndNote Karakan A, Oğuz Y (01 March 2026) Real-Time Word Detection in Turkish Sign Language with Deep Learning. Celal Bayar University Journal of Science 22 1 132–141.
IEEE A. Karakan and Y. Oğuz, “Real-Time Word Detection in Turkish Sign Language with Deep Learning”, Celal Bayar University Journal of Science, vol. 22, no. 1, pp. 132–141, Mar. 2026, doi: 10.18466/cbayarfbe.1635817.
ISNAD Karakan, Abdil - Oğuz, Yüksel. “Real-Time Word Detection in Turkish Sign Language with Deep Learning”. Celal Bayar University Journal of Science 22/1 (01 March 2026): 132-141. https://doi.org/10.18466/cbayarfbe.1635817.
JAMA Karakan A, Oğuz Y. Real-Time Word Detection in Turkish Sign Language with Deep Learning. Celal Bayar University Journal of Science. 2026;22:132–141.
MLA Karakan, Abdil, and Yüksel Oğuz. “Real-Time Word Detection in Turkish Sign Language with Deep Learning”. Celal Bayar University Journal of Science, vol. 22, no. 1, Mar. 2026, pp. 132-41, doi:10.18466/cbayarfbe.1635817.
Vancouver Karakan A, Oğuz Y. Real-Time Word Detection in Turkish Sign Language with Deep Learning. Celal Bayar University Journal of Science. 01 March 2026;22(1):132-41. doi:10.18466/cbayarfbe.1635817