Research Article
Year 2020, Volume: 2, Issue: 2, 38-50, 28.12.2020


A Deep learning integrated mobile application for historic landmark recognition: A case study of Istanbul


Abstract

Recent developments in mobile device technology and artificial intelligence systems have attracted the attention of many researchers. Historical sites and landmarks are an indispensable part of a city's heritage. Historic landmark recognition enriched with detailed attribute information can connect people directly with the history of a city, even when they are not familiar with the monument in front of them. This can be achieved by integrating mobile and deep learning technologies. Therefore, this study focuses on establishing a deep learning (DL) based mobile historic landmark recognition system. The VGG (16, 19), ResNet (50, 101, 152) and DenseNet (121, 169, 201) DL architectures were trained end-to-end to recognize ten historic landmarks in the metropolitan city of Istanbul, Turkey. The dataset was prepared by collecting images of these ten historical buildings from image hosting services. The developed prototype automatically and instantly recognizes the landmarks in scene images and immediately provides related historical information as well as route planning. The experimental results indicate that the DenseNet-169 architecture is very effective for our dataset, achieving 96.3% accuracy. This study shows that deep learning offers a promising means of recognizing historic landmarks.
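
To make the workflow summarized in the abstract concrete, the sketch below fine-tunes an ImageNet-pretrained DenseNet-169 in Keras on a ten-class landmark image set, mirroring the end-to-end training described above. It is a minimal illustration rather than the authors' implementation: the dataset directory layout, input size, optimizer, batch size, and epoch count are assumptions chosen for the example.

```python
# Minimal sketch (not the authors' code): fine-tuning DenseNet-169 for a
# ten-class historic landmark classifier with Keras. Paths, image size,
# optimizer and epoch count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet169
from tensorflow.keras.applications.densenet import preprocess_input

NUM_CLASSES = 10           # ten Istanbul landmarks
IMG_SIZE = (224, 224)      # assumed input resolution
BATCH_SIZE = 32

# Assumed directory layout: dataset/<landmark_name>/<image>.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE)

# ImageNet-pretrained backbone; all layers stay trainable to mirror the
# end-to-end training described in the abstract.
backbone = DenseNet169(include_top=False, weights="imagenet",
                       input_shape=IMG_SIZE + (3,), pooling="avg")

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = preprocess_input(inputs)          # DenseNet-specific normalization
x = backbone(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",   # integer labels
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)

# Saved model can later be converted for on-device use (e.g. coremltools).
model.save("landmark_densenet169.h5")
```

The saved Keras model could then be converted to a Core ML model (for example with the coremltools package) for on-device inference in an iOS prototype, along the lines of the CoreML conversion documentation cited in the references.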

References

  • Amato, F., Moscato, V., Picariello, A., Colace, F., Santo, M.D., Schreiber, F.A., Tanca, L. (2017). Big data meets digital cultural heritage: Design and implementation of scrabs, a smart context-aware browsing assistant for cultural environments. Journal on Computing and Cultural Heritage (JOCCH), 10 (1), 6.
  • Brown, B. (2007). Working the problems of tourism. Annals of Tourism Research, 34 (2), 364-383.
  • Cheng, Z., Shen, J. (2016). On very large scale test collection for landmark image search benchmarking. Signal Processing, 124, 13–26.
  • Chollet, F. (2015). Keras: Deep learning for humans, Github. https://github.com/keras-team/keras
  • CoreML Documentation, (2019). Converting Trained Models to Core ML, https://developer.apple.com/documentation/coreml/converting_trained_models_to_core_ml [Accessed on 2 November 2019].
  • CoreML Framework, (2019). CoreML Framework Overview, https://developer.apple.com/documentation/coreml [Accessed on 2 November 2019].
  • Core Location Framework, (2019). Core Location Framework Overview, https://developer.apple.com/documentation/corelocation [Accessed on 2 November 2019].
  • Deng, J., Berg, A., Satheesh, S., Su, H., Khosla, A., Fei-Fei, L. (2012). ImageNet Large Scale Visual Recognition Competition (ILSVRC2012). http://www.image-net.org/challenges/LSVRC/2012/.
  • Doğan, Y., Yakar, M. (2018). GIS and three-dimensional modeling for cultural heritages. International Journal of Engineering and Geosciences, 3 (2), 50–55.
  • Gunawan, A.A., Surya, K., Meiliana. (2018). Brainwave classification of visual stimuli based on low cost EEG spectrogram using DenseNet. Procedia Computer Science, 135, 128–139.
  • Hays, J., Efros, A.A. (2008). IM2GPS: Estimating geographic information from a single image. In 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • He, K., Zhang, X., Ren, S., Sun, J. (2016a). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778.
  • He, K., Zhang, X., Ren, S., Sun, J. (2016b). Identity mappings in deep residual networks. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 630–645.
  • Huang, C., Xu, H., Xie, L., Zhu, J., Xu, C., Tang, Y. (2018). Large-scale semantic web image retrieval using bimodal deep learning techniques. Information Sciences, 430–431, 331–348.
  • Huang, F., Zhang, X., Zhao, Z., Li, Z., He, Y. (2018). Deep multi-view representation learning for social images. Applied Soft Computing Journal, 73, 106–118.
  • Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q. (2017). Densely connected convolutional networks. In Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition - CVPR 2017, pp. 2261–2269.
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019a). Where I am: Maiden’s Tower, http://www.istanbulkulturturizm.gov.tr/EN-171825/maidens-tower.html [Accessed on 12 September 2019].
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019b). Where I am: Sultanahmet Mosque (Blue Mosque), http://www.istanbulkulturturizm.gov.tr/EN-174344/sultanahmet-mosque-blue-mosque.html [Accessed on 12 September 2019].
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019c). Where I am: Galata Tower, http://www.istanbulkulturturizm.gov.tr/EN-171079/galata-tower.html [Accessed on 12 September 2019].
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019d). Where I am: Ayasofya Museum, http://www.istanbulkulturturizm.gov.tr/EN-171028/ayasofya-museum.html [Accessed on 12 September 2019].
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019e). Where I am: Ortakoy Mosque, http://www.istanbulkulturturizm.gov.tr/TR-209469/ortakoy-camii.html [Accessed on 12 September 2019].
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019f). Where I am: Topkapı Palace, http://www.istanbulkulturturizm.gov.tr/EN-174359/topkapi-palace.html [Accessed on 12 September 2019].
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019g). Where I am: Dolmabahçe Palace, http://www.istanbulkulturturizm.gov.tr/EN-171052/dolmabahce-palace.html [Accessed on 12 September 2019].
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019h). Where I am: Dikilitas (Theodosius Sütunu), http://www.istanbulkulturturizm.gov.tr/TR-209450/dikilitas-theodosius-sutunu.html [Accessed on 12 September 2019].
  • IPDCT - Istanbul Provincial Directorate of Culture and Tourism, (2019i). Where I am: Dolmabahce Clock Tower, http://www.istanbulkulturturizm.gov.tr/EN-171039/dolmabahce-clock-tower.html [Accessed on 12 September 2019].
  • Jiang, B., Yang, J., Lv, Z., Tian, K., Meng, Q., Yan, Y. (2017). Internet cross-media retrieval based on deep learning. Journal of Visual Communication and Image Representation, 48, 356–366.
  • Keras Documentation, (2019). https://keras.io/optimizers/ [Accessed on 25 August 2019].
  • Lin, M., Chen, Q., Yan, S. (2013). Network in network. arXiv preprint arXiv:1312.4400.
  • MapKit Framework, (2019). https://developer.apple.com/documentation/mapkit [Accessed on 12 September 2019].
  • Maskrey, M., Wang, W. (2018). Using facial and text recognition. In Pro iPhone Development with Swift 4, pp. 285–315, Apress, Berkeley, CA.
  • McGookin, D., Tahiroğlu, K., Vaittinen, T., Kytö, M., Monastero, B., Carlos Vasquez, J. (2019). Investigating tangential access for location-based digital cultural heritage applications. International Journal of Human Computer Studies, 122, 196–210.
  • Mulazimoglu, E., Basaraner, M. (2019). User-centred design and evaluation of multimodal tourist maps. International Journal of Engineering and Geosciences, 4 (3), 115–128.
  • Nawaz, M., Sewissy, A.A., Soliman, T.H.A. (2018). Multi-class breast cancer classification using deep learning convolutional neural network. International Journal of Advanced Computer Science and Applications, 9 (6), 316–332.
  • Nibali, A., He, Z., Wollersheim, D. (2017). Pulmonary nodule classification with deep residual networks. International Journal of Computer Assisted Radiology and Surgery, 12 (10), 1799–1808.
  • Parkhi, O.M., Vedaldi, A., Zisserman, A. (2015). Deep face recognition. In Proceedings of the British Machine Vision Conference 2015, pp. 41.1–41.12.
  • Patterson, J., Gibson, A. (2017). Deep learning: A practitioner’s approach. O’Reilly Media, Sebastopol, CA, USA.
  • Richards, G. (2018). Cultural tourism: A review of recent research and trends. Journal of Hospitality and Tourism Management, 36, 12–21.
  • Rothe, R., Timofte, R., Van Gool, L. (2018). Deep expectation of real and apparent age from a single image without facial landmarks. International Journal of Computer Vision, 126 (2–4), 144–157.
  • Schaul, T., Antonoglou, I., Silver, D. (2014). Unit tests for stochastic optimization. In International Conference on Learning Representations. arXiv:1312.6055.
  • Shukla, P., Rautelai, B., Mittal, A. (2017). A computer vision framework for automatic description of Indian monuments. In 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 4–7 December, Jaipur, India.
  • Simonyan, K., Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • Soon, F. C., Khaw, H.Y., Chuah, J.H., Kanesan, J. (2018). Hyper-parameters optimisation of deep CNN architecture for vehicle logo recognition. IET Intelligent Transport Systems, 12 (8), 939–946.
  • Şasi, A., Yakar, M. (2018). Photogrammetric modelling of Hasbey Dar'ülhuffaz (Masjid) using an unmanned aerial vehicle. International Journal of Engineering and Geosciences, 3 (1), 6–11.
  • Termritthikun, C., Kanprachar, S., Muneesawang, P. (2018). NU-LiteNet: Mobile landmark recognition using convolutional neural networks. arXiv preprint arXiv:1810.01074v1.
  • Tzelepi, M., Tefas, A. (2018). Deep convolutional image retrieval: A general framework. Signal Processing: Image Communication, 63, 30–43.
  • UNESCO, (2006). Cities Named '2010 European Capital of Culture' include World Heritage sites. https://whc.unesco.org/en/news/248/ [Accessed on 12 September 2019].
  • Weyand, T., Leibe, B. (2015). Visual landmark recognition from Internet photo collections: A large-scale evaluation. Computer Vision and Image Understanding, 135, 1–15.
  • Xu, H., Huang, C., Wang, D. (2019). Enhancing semantic image retrieval with limited labeled examples via deep learning. Knowledge-Based Systems, 163, 252–266.
  • Yorulmaz, M., Celik, O.C. (1995). Structural analysis of the Bozdoğan (Valens) aqueduct in Istanbul. In Arch Bridges, Ed. C. Melbourne, Thomas Telford, London, 175–180.
  • Zeiler, M.D. (2012). ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
  • Zhang, D., Han, X., Deng, C. (2018). Review on the research and practice of deep learning and reinforcement learning in smart grids. CSEE Journal of Power and Energy Systems, 4 (3), 362–370.
  • Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A. (2018). Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40 (6), 1452–1464.
  • URL1: https://github.com/fchollet/deep-learning-models [Accessed 10 September 2019]
  • URL2: https://github.com/raghakot/keras-resnet [Accessed 10 September 2019]
  • URL3: https://github.com/flyyufelix/DenseNet-Keras [Accessed 10 September 2019]
There are 55 references in total.

Details

Primary Language English
Subjects Engineering
Section Research Articles
Authors

Bülent Bayram 0000-0002-4248-116X

Batuhan Kılıç 0000-0002-0529-8569

Furkan Özoğlu 0000-0002-6276-3762

Fırat Erdem 0000-0002-6163-1979

Tolga Bakirman 0000-0001-7828-9666

Sinan Sivri 0000-0002-8591-9555

Onur Can Bayrak 0000-0002-5147-747X

Ahmet Delen 0000-0003-3091-8501

Publication Date 28 December 2020
Published in Issue Year 2020, Volume: 2, Issue: 2

Cite

APA Bayram, B., Kılıç, B., Özoğlu, F., Erdem, F., et al. (2020). A Deep learning integrated mobile application for historic landmark recognition: A case study of Istanbul. Mersin Photogrammetry Journal, 2(2), 38-50.
AMA Bayram B, Kılıç B, Özoğlu F, Erdem F, Bakirman T, Sivri S, Bayrak OC, Delen A. A Deep learning integrated mobile application for historic landmark recognition: A case study of Istanbul. MEPHOJ. December 2020;2(2):38-50.
Chicago Bayram, Bülent, Batuhan Kılıç, Furkan Özoğlu, Fırat Erdem, Tolga Bakirman, Sinan Sivri, Onur Can Bayrak, and Ahmet Delen. “A Deep Learning Integrated Mobile Application for Historic Landmark Recognition: A Case Study of Istanbul”. Mersin Photogrammetry Journal 2, no. 2 (December 2020): 38-50.
EndNote Bayram B, Kılıç B, Özoğlu F, Erdem F, Bakirman T, Sivri S, Bayrak OC, Delen A (01 December 2020) A Deep learning integrated mobile application for historic landmark recognition: A case study of Istanbul. Mersin Photogrammetry Journal 2 2 38–50.
IEEE B. Bayram, “A Deep learning integrated mobile application for historic landmark recognition: A case study of Istanbul”, MEPHOJ, vol. 2, no. 2, pp. 38–50, 2020.
ISNAD Bayram, Bülent et al. “A Deep Learning Integrated Mobile Application for Historic Landmark Recognition: A Case Study of Istanbul”. Mersin Photogrammetry Journal 2/2 (December 2020), 38-50.
JAMA Bayram B, Kılıç B, Özoğlu F, Erdem F, Bakirman T, Sivri S, Bayrak OC, Delen A. A Deep learning integrated mobile application for historic landmark recognition: A case study of Istanbul. MEPHOJ. 2020;2:38–50.
MLA Bayram, Bülent et al. “A Deep Learning Integrated Mobile Application for Historic Landmark Recognition: A Case Study of Istanbul”. Mersin Photogrammetry Journal, vol. 2, no. 2, 2020, pp. 38-50.
Vancouver Bayram B, Kılıç B, Özoğlu F, Erdem F, Bakirman T, Sivri S, Bayrak OC, Delen A. A Deep learning integrated mobile application for historic landmark recognition: A case study of Istanbul. MEPHOJ. 2020;2(2):38-50.