Research Article

Deep Learning Methods in Unmanned Underwater Vehicles

Year 2020, Ejosat Special Issue 2020 (ICCEES), 345 - 350, 05.10.2020
https://doi.org/10.31590/ejosat.804599

Abstract

Unmanned underwater vehicles (ROV/AUV) are robotic systems that can operate underwater, either autonomously or under remote control. Interest in their operational use has grown steadily in naval forces, the defense industry, and many other fields. They are deployed in a wide range of civilian and military applications, including the protection of natural resources, the protection and study of environmental resources, various construction activities, and coastal and national security; in recent years they have also supported a large share of academic and industrial research. In short, they are remotely operated vehicles with observation and exploration capabilities. This article discusses image processing and deep learning methods for unmanned underwater vehicles, presents a comprehensive review of the artificial intelligence techniques involved, and aims to contribute to our country's defense industry. The design choices that allow the vehicle to carry out autonomous missions are described. A Raspberry Pi 3 was used as the on-board computer for autonomous operation, together with the Raspberry Pi Camera Module, which is compatible with the Raspberry Pi 3. Python was used as the programming language. Objects in the images captured by the camera were detected using the OpenCV library and deep learning, with the TensorFlow deep learning library used for object detection and tracking. The Faster-RCNN-Inception-V2 model was used first; however, it could not reach an acceptable frame rate (FPS) on the Raspberry Pi 3. For this reason, the SSDLite-MobileNet-V2 model, which is fast enough for most real-time object detection applications, was preferred.
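
To make the detection pipeline described above concrete, the sketch below shows how an SSD-style MobileNet detector can be run on camera frames with OpenCV on a Raspberry Pi class device. This is a minimal illustration under stated assumptions, not the authors' code: it assumes the SSDLite-MobileNet-V2 model has been converted to a TensorFlow Lite file, and the model path, label list, camera index, and score threshold are placeholders.

```python
# Minimal sketch (assumption, not the authors' code): SSD-style MobileNet detection
# on camera frames with OpenCV and the TensorFlow Lite interpreter, as is commonly
# done on a Raspberry Pi 3. Model path, labels, camera index and threshold are
# placeholders; the paper used TensorFlow Object Detection API models.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or: tensorflow.lite.Interpreter

MODEL_PATH = "ssdlite_mobilenet_v2.tflite"     # hypothetical converted model file
LABELS = ["background", "person", "bicycle"]   # placeholder label map
SCORE_THRESHOLD = 0.5

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, in_h, in_w, _ = input_details[0]["shape"]

cap = cv2.VideoCapture(0)  # Pi Camera Module exposed through the V4L2 driver
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Resize and convert BGR -> RGB to match the model input. This assumes a
    # quantized (uint8) model; a float model would also need normalization.
    rgb = cv2.cvtColor(cv2.resize(frame, (int(in_w), int(in_h))), cv2.COLOR_BGR2RGB)
    interpreter.set_tensor(input_details[0]["index"], np.expand_dims(rgb, axis=0))
    interpreter.invoke()

    # Typical SSD post-processed outputs: boxes, class ids, scores. The output
    # order can differ between conversions, so verify against output_details.
    boxes = interpreter.get_tensor(output_details[0]["index"])[0]
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]

    h, w = frame.shape[:2]
    for box, cls, score in zip(boxes, classes, scores):
        if score < SCORE_THRESHOLD:
            continue
        ymin, xmin, ymax, xmax = box  # normalized [0, 1] coordinates
        p1 = (int(xmin * w), int(ymin * h))
        p2 = (int(xmax * w), int(ymax * h))
        cv2.rectangle(frame, p1, p2, (0, 255, 0), 2)
        label = LABELS[int(cls)] if int(cls) < len(LABELS) else str(int(cls))
        cv2.putText(frame, "%s: %.2f" % (label, score), (p1[0], p1[1] - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The same capture-and-draw loop applies when the detector is the heavier Faster-RCNN-Inception-V2 model; only the per-frame inference cost changes, which is why the lighter SSDLite-MobileNet-V2 was the practical choice on the Raspberry Pi 3.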


Details

Primary Language English
Subjects Engineering
Journal Section Articles
Authors

Ercan Ataner 0000-0002-4548-9968

Büşra Özdeş 0000-0002-3902-3053

Gamze Öztürk 0000-0003-3278-0016

Taha Yasin Can Çelik 0000-0002-3955-0953

Akif Durdu 0000-0002-5611-2322

Hakan Terzioğlu 0000-0001-5928-8457

Publication Date October 5, 2020
Published in Issue Year 2020 Ejosat Special Issue 2020 (ICCEES)

Cite

APA Ataner, E., Özdeş, B., Öztürk, G., Çelik, T. Y. C., et al. (2020). Deep Learning Methods in Unmanned Underwater Vehicles. Avrupa Bilim ve Teknoloji Dergisi, 345-350. https://doi.org/10.31590/ejosat.804599