Research Article

TRimCapS: A MACHINE LEARNING-BASED SYSTEM FOR CLASSIFYING IMAGE CAPTIONS IN THE TURKISH LANGUAGE

Year 2025, Volume: 24, Issue: 48, 438–464, 18.12.2025
https://doi.org/10.55071/ticaretfbd.1635443

Abstract

With the widespread adoption of digital media, the analysis of image and video content has gained importance. Turkish image caption classification, however, remains a major research challenge due to the structural complexity of the language and the scarcity of Turkish datasets. To address this problem, the TasvirEt, Flickr30k, and MS COCO datasets were combined to create the ImCapTR dataset, comprising 114,566 images and 588,867 Turkish captions. In the proposed TRimCapS system, captions were vectorized with TF-IDF, CountVectorizer, and GloVe, and categorized using K-Means and Latent Dirichlet Allocation. Feature selection was performed with information gain, chi-square, Fisher score, mutual information, and principal component analysis. In classification experiments with various machine learning and deep learning models, the combination of CountVectorizer and BERT achieved the best result, with an accuracy of 98.84%. Information gain and principal component analysis outperformed the other feature selection methods. This study presents the most comprehensive experimental results to date on Turkish image caption classification and is the first to make the constructed dataset publicly available to researchers.
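
To make the pipeline concrete, here is a minimal scikit-learn sketch of the stages the abstract names: captions are vectorized with CountVectorizer, category labels are derived without manual annotation via K-Means (with LDA fitted alongside), chi-square feature selection trims the vocabulary, and a classifier is trained on the result. This is an illustration under stated assumptions, not the authors' implementation: the file name imcaptr_captions.csv, the caption column, the category count k, the feature budget, and the logistic-regression classifier (standing in for the paper's best-performing BERT model) are all hypothetical.

```python
# Minimal sketch of a TRimCapS-style pipeline; names and parameters
# marked below are illustrative assumptions, not taken from the paper.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical input file: one Turkish caption per row.
captions = pd.read_csv("imcaptr_captions.csv")["caption"]

# 1) Vectorize the captions (the paper also evaluates TF-IDF and GloVe).
vectorizer = CountVectorizer(max_features=20_000)
X = vectorizer.fit_transform(captions)

# 2) Derive category labels without manual annotation: K-Means clusters
#    in vector space; LDA topics over the same counts, for comparison.
k = 10  # assumed number of categories
labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
lda = LatentDirichletAllocation(n_components=k, random_state=42).fit(X)

# 3) Feature selection: chi-square shown here; the paper also compares
#    information gain, Fisher score, mutual information, and PCA.
X_sel = SelectKBest(chi2, k=5_000).fit_transform(X, labels)

# 4) Train and evaluate a classifier on the selected features. Logistic
#    regression stands in for the paper's models (its best pairing is
#    CountVectorizer + BERT at 98.84% accuracy).
X_tr, X_te, y_tr, y_te = train_test_split(
    X_sel, labels, test_size=0.2, stratify=labels, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.4f}")
```

In the paper's setup, the categories derived in this way serve as labels on which the various vectorizer–classifier combinations, including BERT, are then trained and compared.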

References

  • Akın, A. A., & Akın, M. D. (2007). Zemberek, an open source NLP framework for Turkic languages. Structure, 10(2007), 1–5.
  • Andrearczyk, V., & Müller, H. (2018). Deep multimodal classification of image types in biomedical journal figures. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 9th International Conference of the CLEF Association, CLEF 2018 (3–14). Springer.
  • Anjoletto Ferreira, L., De Rizzo Meneghetti, D., & Santos, P. E. (2020). CAPTION: Correction by analyses, POS-tagging and interpretation of objects using only nouns. arXiv preprint arXiv:2010.00839.
  • Bharne, S., & Bhaladhare, P. (2024). Enhancing user profile authenticity through automatic image caption generation using a bootstrapping language–image pre-training model. Engineering Proceedings, 59(1), 182.
  • Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993–1022.
  • Bro, R., & Smilde, A. K. (2014). Principal component analysis. Analytical Methods, 6(9), 2812–2831.
  • Budak, H. (2018). Özellik seçim yöntemleri ve yeni bir yaklaşım. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 22, 21–31.
  • Cao, Y., Li, W., Li, J., Yuan, Y., & Hershcovich, D. (2024). Exploring visual culture awareness in GPT-4V: A comprehensive probing. arXiv preprint arXiv:2402.06015.
  • Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (785–794).
  • Deng, X., Li, Y., Weng, J., & Zhang, J. (2019). Feature selection for text classification: A review. Multimedia Tools and Applications, 78(3), 3797–3816.
  • Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • Ersoy, A., Yıldız, O. T., & Özer, S. (2023). ORTPiece: An ORT-based Turkish image captioning network based on transformers and WordPiece. In 2023 31st Signal Processing and Communications Applications Conference (SIU) (1–4).
  • Ertuğrul, M. A. C., & Omurca, S. İ. (2023). Generating image captions using deep neural networks. In 2023 8th International Conference on Computer Science and Engineering (UBMK) (271–275).
  • Ferreira, L. A., De Rizzo Meneghetti, D., Lopes, M., & Santos, P. E. (2022). CAPTION: Caption analysis with proposed terms, image of objects, and natural language processing. SN Computer Science, 3(5), 390.
  • Forman, G. (2003). An extensive empirical study of feature selection metrics for text classification. Journal of Machine Learning Research, 3(Mar), 1289–1305.
  • Gaspar, A., & Alexandre, L. A. (2019). A multimodal approach to image sentiment analysis. In Intelligent Data Engineering and Automated Learning – IDEAL 2019: 20th International Conference (302–309). Springer.
  • Golech, S. B., Karacan, S. B., Sönmez, E. B., & Ayral, H. (2022). A complete human verified Turkish caption dataset for MS COCO and performance evaluation with well-known image caption models trained against it. In 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME) (1–6).
  • Gu, Q., Li, Z., & Han, J. (2012). Generalized Fisher score for feature selection. arXiv preprint arXiv:1202.3725.
  • Hu, H., Liu, J., Zhang, X., & Fang, M. (2023). An effective and adaptable K-means algorithm for big data cluster analysis. Pattern Recognition, 139, 109404.
  • Kafka, A. (2017). Caption aided action recognition using single images. Lehigh University.
  • Kaur, P., & Kiesel, D. (2020). Combining image and caption analysis for classifying charts in biodiversity texts. In VISIGRAPP (3: IVAPP) (157–168).
  • Kraskov, A., Stögbauer, H., & Grassberger, P. (2004). Estimating mutual information. Physical Review E, 69(6), 066138.
  • Kuyu, M., Erdem, A., & Erdem, E. (2018). Image captioning in Turkish with subword units. In 2018 26th Signal Processing and Communications Applications Conference (SIU) (1–4).
  • Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference (740–755). Springer.
  • Loper, E., & Bird, S. (2002). NLTK: The natural language toolkit. arXiv preprint cs/0205028.
  • Mussabayev, R., Mladenovic, N., Jarboui, B., & Mussabayev, R. (2023). How to use K-means for big data clustering? Pattern Recognition, 137, 109269.
  • Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (1532–1543).
  • Peters, C. C., & Van Voorhis, W. R. (1940). Chi square.
  • Rafi, A. M., Rana, S., Kaur, R., Wu, Q. J., & Zadeh, P. M. (2020). Understanding global reaction to the recent outbreaks of COVID-19: Insights from Instagram data analysis. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (3413–3420).
  • Robertson, S. (2004). Understanding inverse document frequency: On theoretical arguments for IDF. Journal of Documentation, 60(5), 503–520.
  • Samet, N., Hiçsönmez, S., Duygulu, P., & Akbaş, E. (2017). Görüntü altyazılama için otomatik tercümeyle eğitim kümesi oluşturulabilir mi? In 25th Signal Processing and Communications Applications Conference (SIU).
  • Shekhar, R., Pezzelle, S., Klimovich, Y., Herbelot, A., Nabi, M., Sangineto, E., & Bernardi, R. (2017). FOIL it! Find one mismatch between image and language caption. arXiv preprint arXiv:1705.01359.
  • Sönmez, E. B., Yıldız, T., Yılmaz, B. D., & Demir, A. E. (2019). Türkçe dilinde görüntü altyazısı: Veritabanı ve model. Gazi Üniversitesi Mühendislik-Mimarlık Fakültesi Dergisi, 35(4), 2089–2100.
  • Syakur, M., Khotimah, B. K., Rochman, E., & Satoto, B. D. (2018). Integration K-means clustering method and elbow method for identification of the best customer profile cluster. In IOP Conference Series: Materials Science and Engineering (Vol. 336, 012017).
  • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (1–9).
  • Turki, T., & Roy, S. S. (2022). Novel hate speech detection using word cloud visualization and ensemble learning coupled with count vectorizer. Applied Sciences, 12(13), 6611.
  • Ünal, M. E., Çıtamak, B., Yağcıoğlu, S., Erdem, A., Erdem, E., Cinbis, N. I., & Çakıcı, R. (2016). TasvirEt: A benchmark dataset for automatic Turkish description generation from images. In 2016 24th Signal Processing and Communication Application Conference (SIU) (1977–1980).
  • Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2015). Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (3156–3164).
  • Yıldız, S., Memiş, A., & Çarlı, S. (2023). Automatic Turkish image captioning: The impact of deep machine translation. In 2023 8th International Conference on Computer Science and Engineering (UBMK) (414–419).
  • Yıldız, S., Memiş, A., & Varlı, S. (2023). TRCaptionNet: A novel and accurate deep Turkish image captioning model with vision transformer based image encoders and deep linguistic text decoders. Turkish Journal of Electrical Engineering and Computer Sciences, 31(6), 1079–1098.
  • Yu, X., Ahn, Y., & Jeong, J. (2021). High-level image classification by synergizing image captioning with BERT. In 2021 International Conference on Information and Communication Technology Convergence (ICTC) (1686–1690).


Details

Primary Language Turkish
Subjects Deep Learning, Natural Language Processing
Section Research Article
Authors

Merve Pınar 0000-0003-3041-6958

Esra Yılmaz 0000-0003-2411-4937

Zeki Çıplak 0000-0002-0086-3223

Ayşe Berna Altınel Girgin 0000-0001-5544-0925

Submission Date 8 February 2025
Acceptance Date 11 July 2025
Early View Date 9 December 2025
Publication Date 18 December 2025
Published in Issue Year 2025, Volume: 24, Issue: 48

Cite

APA Pınar, M., Yılmaz, E., Çıplak, Z., & Altınel Girgin, A. B. (2025). TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi, 24(48), 438-464. https://doi.org/10.55071/ticaretfbd.1635443
AMA Pınar M, Yılmaz E, Çıplak Z, Altınel Girgin AB. TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi. Aralık 2025;24(48):438-464. doi:10.55071/ticaretfbd.1635443
Chicago Pınar, Merve, Esra Yılmaz, Zeki Çıplak, ve Ayşe Berna Altınel Girgin. “TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ”. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi 24, sy. 48 (Aralık 2025): 438-64. https://doi.org/10.55071/ticaretfbd.1635443.
EndNote Pınar M, Yılmaz E, Çıplak Z, Altınel Girgin AB (01 Aralık 2025) TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi 24 48 438–464.
IEEE M. Pınar, E. Yılmaz, Z. Çıplak, ve A. B. Altınel Girgin, “TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ”, İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi, c. 24, sy. 48, ss. 438–464, 2025, doi: 10.55071/ticaretfbd.1635443.
ISNAD Pınar, Merve vd. “TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ”. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi 24/48 (Aralık 2025), 438-464. https://doi.org/10.55071/ticaretfbd.1635443.
JAMA Pınar M, Yılmaz E, Çıplak Z, Altınel Girgin AB. TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi. 2025;24:438–464.
MLA Pınar, Merve vd. “TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ”. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi, c. 24, sy. 48, 2025, ss. 438-64, doi:10.55071/ticaretfbd.1635443.
Vancouver Pınar M, Yılmaz E, Çıplak Z, Altınel Girgin AB. TRimCapS: MAKİNE ÖĞRENMESİ İLE TÜRKÇE DİLİNDEKİ GÖRÜNTÜ ALT YAZILARINI SINIFLANDIRMA SİSTEMİ. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi. 2025;24(48):438-64.