Deep Learning-Based Automatic Text Extraction from Images: An Image Captioning System
Year 2022, Volume: 34, Issue: 2, pp. 829-837, 30.09.2022
Zeynep Karaca, Bihter Daş
Abstract
Image captioning (generating a textual description of an image), a research area at the intersection of computer vision and natural language processing, is the task of automatically describing the content of an image in natural language. In this study, an automatic caption-generation approach based on the encoder-decoder technique is proposed for English on the MS COCO dataset. In the proposed approach, a Convolutional Neural Network (CNN) architecture is used as the encoder to extract image features, and a Recurrent Neural Network (RNN) architecture is used as the decoder to generate captions from those features. The performance of the proposed approach is evaluated using the BLEU, METEOR, and ROUGE_L metrics, with five candidate sentences generated per image. Experimental results show that the model performs satisfactorily at correctly detecting the objects in the images.
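The paper itself includes no code, but the encoder-decoder design described above can be illustrated with a minimal PyTorch sketch. The ResNet-50 backbone, the LSTM decoder, and all sizes (embedding, hidden, vocabulary) below are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of a CNN-encoder / RNN-decoder captioning model (assumed
# ResNet-50 + LSTM; hyperparameters are placeholders, not from the paper).
import torch
import torch.nn as nn
import torchvision.models as models


class CNNEncoder(nn.Module):
    """Encode an image into a fixed-length feature vector."""

    def __init__(self, embed_size: int):
        super().__init__()
        resnet = models.resnet50(weights=None)  # pretrained weights optional
        # Drop the classification head; keep the convolutional feature extractor.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.fc = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        features = self.backbone(images).flatten(1)  # (batch, 2048)
        return self.fc(features)                     # (batch, embed_size)


class RNNDecoder(nn.Module):
    """Generate a caption, token by token, conditioned on image features."""

    def __init__(self, embed_size: int, hidden_size: int, vocab_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        # Prepend the image feature as the first "token" of the input sequence.
        embeddings = self.embed(captions)                         # (batch, T, embed)
        inputs = torch.cat([features.unsqueeze(1), embeddings], dim=1)
        hidden, _ = self.lstm(inputs)                             # (batch, T+1, hidden)
        return self.fc(hidden)                                    # per-step vocab logits


# Shape check with dummy data.
encoder = CNNEncoder(embed_size=256)
decoder = RNNDecoder(embed_size=256, hidden_size=512, vocab_size=10000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 10000, (2, 15))
logits = decoder(encoder(images), captions)
print(logits.shape)  # torch.Size([2, 16, 10000])
```

At inference time the decoder would instead be unrolled greedily (or with beam search), feeding each predicted token back in as the next input.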
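On the evaluation side, BLEU (and, in the same spirit, METEOR and ROUGE_L) scores a generated caption by n-gram overlap with reference captions. As a toy illustration with NLTK, on invented sentences rather than the paper's data:

```python
# Sentence-level BLEU with NLTK on a made-up hypothesis/reference pair.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["a", "man", "rides", "a", "horse", "on", "the", "beach"]]
hypothesis = ["a", "man", "is", "riding", "a", "horse", "on", "a", "beach"]

# Smoothing avoids zero scores when a higher-order n-gram has no match.
smooth = SmoothingFunction().method1
score = sentence_bleu(references, hypothesis, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```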