Research Article

An efficient activity recognition model that integrates object recognition and image captioning with deep learning techniques for visually impaired people

Year 2024, Volume 39, Issue 4, 2177-2186, 20.05.2024
https://doi.org/10.17341/gazimmfd.1245400

Abstract

Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. This study presents a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation to generate natural sentences describing an image. With this model, the text obtained from images can be converted into audio files, so that the activity of the objects around a visually impaired person can be described to them. To this end, object recognition is first performed on the images with the YOLO model, which identifies the presence, location, and type of one or more objects in a given image. Next, long short-term memory (LSTM) networks are trained to maximize the likelihood of the target description sentence given the training image, so that the activities in the image are converted into textual descriptions. These textual descriptions are then turned into audio files that narrate the activity, using Google's text-to-speech platform. To demonstrate the effectiveness of the proposed model, four different feature-injection architectures were evaluated on the Flickr8K, Flickr30K, and MSCOCO datasets. The experimental results show that the proposed model successfully voices activity descriptions for visually impaired individuals.
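
As a concrete illustration of the pipeline just described, the sketch below wires the three stages together in Python: YOLO-based detection, LSTM-based caption generation, and text-to-speech output. This is a minimal sketch under stated assumptions rather than the authors' implementation: the `ultralytics` and `gtts` packages, the `yolov8n.pt` weights, the image path, and the untrained `CaptionDecoder` (shown in an "init-inject" style configuration, one common feature-injection variant) are all illustrative stand-ins.

```python
# Hypothetical sketch of the three-stage pipeline: detection -> captioning -> speech.
# Library and weight choices are assumptions, not the authors' actual stack.
import torch
import torch.nn as nn
from ultralytics import YOLO   # assumed YOLO wrapper for object detection
from gtts import gTTS          # Google text-to-speech client

class CaptionDecoder(nn.Module):
    """Minimal LSTM decoder. Image features initialize the hidden state
    (an init-inject style of feature injection); words are then generated
    one at a time. Untrained here, shown for structure only."""
    def __init__(self, feat_dim, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.init_c = nn.Linear(feat_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def greedy_caption(self, feats, bos_id, eos_id, max_len=20):
        # Inject the image features as the initial LSTM state.
        h = self.init_h(feats).unsqueeze(0)   # (1, batch, hidden)
        c = self.init_c(feats).unsqueeze(0)
        token = torch.tensor([[bos_id]])
        words = []
        for _ in range(max_len):
            emb = self.embed(token)           # (1, 1, embed)
            out, (h, c) = self.lstm(emb, (h, c))
            token = self.out(out[:, -1]).argmax(-1, keepdim=True)
            if token.item() == eos_id:
                break
            words.append(token.item())
        return words

# Stage 1: detect the presence, location, and type of objects in the image.
detector = YOLO("yolov8n.pt")                 # assumed pretrained weights
detections = detector("street.jpg")[0]        # boxes, classes, confidences
# (Coupling detections into the caption, e.g. as extra features, is omitted.)

# Stage 2: generate a caption. A CNN encoder would supply feats in practice;
# a random vector stands in here so the sketch runs end to end.
feats = torch.randn(1, 2048)
decoder = CaptionDecoder(feat_dim=2048, vocab_size=5000)
token_ids = decoder.greedy_caption(feats, bos_id=1, eos_id=2)
caption = " ".join(str(t) for t in token_ids)  # id-to-word lookup omitted

# Stage 3: speak the description for the visually impaired user.
gTTS(text=caption, lang="en").save("activity.mp3")
```

In the actual system the decoder would be trained on image-caption pairs (e.g., from Flickr8K, Flickr30K, or MSCOCO) to maximize the log-probability of each reference word given the image and the preceding words, and the generated token ids would be mapped back to vocabulary words before synthesis.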

References

  • 1. Hossain M.Z., Sohel F., Shiratuddin M.F., Laga H., A comprehensive survey of deep learning for image captioning, ACM Computing Surveys, 51 (6), 1-36, 2019.
  • 2. Yao T., Pan Y., Li Y., Qiu Z., Mei T., Boosting image captioning with attributes, IEEE International Conference on Computer Vision, Venice, Italy, 4894-4902, 22-29 October, 2017.
  • 3. You Q., Jin H., Wang Z., Fang C., Luo J., Image captioning with semantic attention, IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 4651-4659, 26 June-1 July, 2016.
  • 4. Pan J.Y., Yang H.J., Duygulu P., Faloutsos C., Automatic image captioning, IEEE International Conference on Multimedia and Expo, Taipei, Taiwan, 1987-1990, 27-30 June, 2004.
  • 5. O'Shea K. and Nash R., An introduction to convolutional neural networks, https://arxiv.org/abs/1511.08458. December 2, 2015. Accessed July 30, 2019.
  • 6. Medsker L.R. and Jain L.C., Recurrent neural networks, Design and Applications, 5, 64-67, 2001.
  • 7. Hochreiter S. and Schmidhuber J., Long short-term memory, Neural Computation, 9 (8), 1735-1780, 1997.
  • 8. Montavon G., Samek W., Müller K.R., Methods for interpreting and understanding deep neural networks, Digital Signal Processing, 73, 1-15, 2018.
  • 9. Guo T., Dong J., Li H., Gao Y., Simple convolutional neural network on image classification, IEEE International Conference on Big Data Analysis, Beijing, China, 721-724, 10-12 March, 2017.
  • 10. Ouyang X., Zhou P., Li C.H., Liu L., Sentiment analysis using convolutional neural network, IEEE International Conference on Computer and Information Technology, Dhaka, Bangladesh, 2359-2364, 21-23 December, 2015.
  • 11. Yang J., Nguyen M.N., San P.P., Li X.L., Krishnaswamy S., Deep convolutional neural networks on multichannel time series for human activity recognition, International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 3995-4001, 25-31 July, 2015.
  • 12. Salamon J. and Bello J.P., Deep convolutional neural networks and data augmentation for environmental sound classification, IEEE Signal Processing Letters, 24 (3), 279-283, 2017.
  • 13. Eyben F., Petridis S., Schuller B., Tzimiropoulos G., Zafeiriou S., Pantic M., Audiovisual classification of vocal outbursts in human conversation using long-short-term memory networks, IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech Republic, 5844-5847, 22-27 May, 2011.
  • 14. Khataei Maragheh H., Gharehchopogh F.S., Majidzadeh K., Sangar A.B., A new hybrid based on long short-term memory network with spotted hyena optimization algorithm for multi-label text classification, Mathematics, 10 (3), 1-24, 2022.
  • 15. Yang Z., Zhang Y., Rehman S., Huang Y., Image captioning with object detection and localization, International Conference on Image and Graphics, Shanghai, China, 109-118, 13-15 September, 2017.
  • 16. Aneja J., Deshpande A., Schwing A.G., Convolutional image captioning, IEEE Conference on Computer Vision and Pattern Recognition, Utah, USA, 5561-5570, 18-22 June, 2018.
  • 17. Redmon J., Divvala S., Girshick R., Farhadi A., You only look once: Unified, real-time object detection, IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 779-788, 26 June-1 July, 2016.
  • 18. Chun P.J., Yamane T., Maemura Y., A deep learning‐based image captioning method to automatically generate comprehensive explanations of bridge damage. Computer‐Aided Civil and Infrastructure Engineering, 37 (11), 1387-1401, 2022.
  • 19. Wang Y., Xiao B., Bouferguene A., Al-Hussein M., Li H., Vision-based method for semantic information extraction in construction by integrating deep learning object detection and image captioning, Advanced Engineering Informatics, 53, 1-13, 2022.
  • 20. Al-Malla M.A., Jafar A., Ghneim N., Image captioning model using attention and object features to mimic human image understanding, Journal of Big Data, 9 (1), 1-16, 2022.
  • 21. Bhalekar M. and Bedekar M., D-CNN: A new model for generating image captions with text extraction using deep learning for visually challenged individuals, Engineering, Technology & Applied Science Research, 12 (2), 8366-8373, 2022.
  • 22. Herdade S., Kappeler A., Boakye K., Soares J., Image captioning: Transforming objects into words, International Conference on Neural Information Processing Systems, Vancouver, Canada, 11137-11147, 8-14 December, 2019.
  • 23. Feng Y., Ma L., Liu W., Luo J., Unsupervised image captioning, IEEE/CVF Conference on Computer Vision and Pattern Recognition, California, USA, 4125-4134, 15-20 June, 2019.
  • 24. Huang L., Wang W., Chen J., Wei X.Y., Attention on attention for image captioning, IEEE/CVF International Conference on Computer Vision, Seoul, South Korea, 4634-4643, 27 October-2 November, 2019.
  • 25. Staniūtė R. and Šešok D., A systematic literature review on image captioning, Applied Sciences, 9 (10), 1-20, 2019.
  • 26. Devlin J., Cheng H., Fang H., Gupta S., Deng L., He X., Mitchell M., Language models for image captioning: The quirks and what works, https://arxiv.org/abs/1505.01809. May 7, 2015.
  • 27. Nina O. and Rodriguez A., Simplified LSTM unit and search space probability exploration for image description, IEEE International Conference on Information, Communications and Signal Processing, Singapore, 1-5, 2-4 December, 2015.
  • 28. Liu S., Zhu Z., Ye N., Guadarrama S., Murphy K., Improved image captioning via policy gradient optimization of SPIDEr, IEEE International Conference on Computer Vision, Venice, Italy, 873-881, 22-29 October, 2017.
  • 29. Mao J., Wei X., Yang Y., Wang J., Huang Z., Yuille A.L., Learning like a child: Fast novel visual concept learning from sentence descriptions of images, IEEE International Conference on Computer Vision, Las Condes, Chile, 2533-2541, 11-18 December, 2015.
  • 30. Sak H., Senior A., Beaufays F., Long short-term memory recurrent neural network architectures for large scale acoustic modeling, Annual Conference of the International Speech Communication Association, Singapore, 338-342, 14-18 September, 2014.
  • 31. Gültekin I., Artuner H., Turkish dialect recognition in terms of prosodic by long short-term memory neural networks, Journal of the Faculty of Engineering and Architecture of Gazi University, 35 (1), 213-224, 2020.
  • 32. Kilimci Z.H., Financial sentiment analysis with Deep Ensemble Models (DEMs) for stock market prediction, Journal of the Faculty of Engineering and Architecture of Gazi University, 35 (2), 635-650, 2020.
  • 33. Altun S. and Alkan A., LSTM-based deep learning application in brain tumor detection using MR spectroscopy, Journal of the Faculty of Engineering and Architecture of Gazi University, 38 (2), 1193-1202, 2022.
  • 34. Gökdemir A. and Çalhan A., Deep learning and machine learning based anomaly detection in internet of things environments, Journal of the Faculty of Engineering and Architecture of Gazi University, 37 (4), 1945-1956, 2022.
  • 35. Utku A., Using network traffic analysis deep learning based Android malware detection, Journal of the Faculty of Engineering and Architecture of Gazi University, 37 (4), 1823-1838, 2022.
  • 36. Akalın F., Yumuşak N., Classification of ALL, AML and MLL leukaemia types on microarray dataset using LSTM neural network approach, Journal of the Faculty of Engineering and Architecture of Gazi University, 38 (3), 1299-1306, 2023.
  • 37. Dölek İ., Kurt A., Ottoman Optical Character Recognition with deep neural networks, Journal of the Faculty of Engineering and Architecture of Gazi University, 38 (4), 2579-2594, 2023.
  • 38. Kantar O., Kilimci Z.H., Deep learning based hybrid gold index (XAU/USD) direction forecast model, Journal of the Faculty of Engineering and Architecture of Gazi University, 38 (2), 1117-1128, 2023.
  • 39. Erol B., İnkaya, T., Long short-term memory network based deep transfer learning approach for sales forecasting, Journal of the Faculty of Engineering and Architecture of Gazi University, 39 (1), 191-202, 2024.
  • 40. Hodosh M., Young P., Hockenmaier J., Framing image description as a ranking task: Data, models and evaluation metrics, Journal of Artificial Intelligence Research, 47, 853-899, 2013.
  • 41. Plummer B.A., Wang L., Cervantes C.M., Caicedo J.C., Hockenmaier J., Lazebnik S., Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models, IEEE International Conference on Computer Vision, Las Condes, Chile, 2641-2649, 2015.
  • 42. Lin T.Y., Maire M., Belongie S., Hays J., Perona P., Ramanan D., Dollár P., Zitnick C.L., Microsoft COCO: Common objects in context, European Conference on Computer Vision, Zurich, Switzerland, 740-755, 6-12 September, 2014.
  • 43. Tanti M., Gatt A., Camilleri K.P., Where to put the image in an image caption generator, Natural Language Engineering, 24 (3), 467-489, 2018.
  • 44. Mulyanto E., Setiawan E.I., Yuniarno E.M., Purnomo M.H., Automatic Indonesian image caption generation using CNN-LSTM model and FEEH-ID dataset, IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, Tianjin, China, 1-5, 14-16 June, 2019.
  • 45. Suresh K.R., Jarapala A., Sudeep P.V., Image captioning encoder–decoder models using CNN-RNN architectures: A comparative study, Circuits, Systems, and Signal Processing, 41 (10), 5719-5742, 2022.
  • 46. Martin A.D., Ahmadzade E., Moon I., Privacy-preserving image captioning with deep learning and double random phase encoding, Mathematics 10 (16), 1-14, 2022.
  • 47. Nugraha A.A. and Arifianto A., Generating image description on Indonesian language using convolutional neural network and gated recurrent unit, International Conference on Information and Communication Technology, Kuala Lumpur, Malaysia, 1-6, 24-26 July, 2019.
  • 48. Keskin R., Çaylı Ö., Moral Ö.T., Kılıç V., Aytuğ O., A benchmark for feature-injection architectures in image captioning, Avrupa Bilim ve Teknoloji Dergisi, 31, 461-468, 2021.
  • 49. You Q., Jin H., Luo J., Image captioning at will: A versatile scheme for effectively injecting sentiments into image descriptions, https://arxiv.org/abs/1801.10121. January 30, 2018.

Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Zeynep Hilal Kilimci (ORCID: 0000-0003-1497-305X)

Ayhan Küçükmanisa (ORCID: 0000-0002-1886-1250)

Early View Date: May 17, 2024
Publication Date: May 20, 2024
Submission Date: January 31, 2023
Acceptance Date: September 17, 2023
Published in Issue: Year 2024, Volume 39, Issue 4

How to Cite

APA Kilimci, Z. H., & Küçükmanisa, A. (2024). Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, 39(4), 2177-2186. https://doi.org/10.17341/gazimmfd.1245400
AMA Kilimci ZH, Küçükmanisa A. Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli. GUMMFD. May 2024;39(4):2177-2186. doi:10.17341/gazimmfd.1245400
Chicago Kilimci, Zeynep Hilal, and Ayhan Küçükmanisa. “Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 39, no. 4 (May 2024): 2177-86. https://doi.org/10.17341/gazimmfd.1245400.
EndNote Kilimci ZH, Küçükmanisa A (01 May 2024) Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 39 4 2177–2186.
IEEE Z. H. Kilimci and A. Küçükmanisa, “Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli”, GUMMFD, vol. 39, no. 4, pp. 2177–2186, 2024, doi: 10.17341/gazimmfd.1245400.
ISNAD Kilimci, Zeynep Hilal - Küçükmanisa, Ayhan. “Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 39/4 (May 2024), 2177-2186. https://doi.org/10.17341/gazimmfd.1245400.
JAMA Kilimci ZH, Küçükmanisa A. Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli. GUMMFD. 2024;39:2177–2186.
MLA Kilimci, Zeynep Hilal, and Ayhan Küçükmanisa. “Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli”. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, vol. 39, no. 4, 2024, pp. 2177-86, doi:10.17341/gazimmfd.1245400.
Vancouver Kilimci ZH, Küçükmanisa A. Görme engelliler için nesne tanıma ve resim altyazısını derin öğrenme teknikleriyle entegre eden verimli bir aktivite tanıma modeli. GUMMFD. 2024;39(4):2177-86.