Research Article

AI-Embedded UAV System for Detecting and Pursuing Unwanted UAVs

Year 2024, Volume: 12, Issue: 1, 1–13, 31.01.2024
https://doi.org/10.21541/apjess.1349856

Abstract

In recent years, the use of unmanned aerial vehicle (UAV) platforms in civil and military applications has surged, underscoring the role that artificial intelligence (AI)-embedded UAV systems will play in the future. This study introduces the Autonomous Drone (Vechür-SIHA), a novel AI-embedded UAV system designed for real-time detection and tracking of other UAVs during flight. Leveraging advanced object detection algorithms and an LSTM-based tracking mechanism, the system achieves 80% accuracy in drone detection, even under challenging conditions such as varying backgrounds and adverse weather.
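As a rough illustration of this detect-then-track pipeline (the implementation itself is not published on this page), the sketch below pairs a YOLO-style detector, stubbed out as dummy box states, with a small LSTM that predicts the target's next bounding box from its recent history. All names, dimensions, and parameters are illustrative assumptions, not the authors' code.

    # Minimal sketch, assuming a YOLO-family detector and an LSTM motion model.
    # Nothing here is from the paper; names and dimensions are illustrative.
    import torch
    import torch.nn as nn

    class TrajectoryLSTM(nn.Module):
        """Predicts the next bounding-box state (cx, cy, w, h) from the last T states."""
        def __init__(self, state_dim: int = 4, hidden_dim: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, state_dim)

        def forward(self, history: torch.Tensor) -> torch.Tensor:
            # history: (batch, T, 4) sequence of normalized box states from the detector
            out, _ = self.lstm(history)
            return self.head(out[:, -1])   # estimated next state, shape (batch, 4)

    model = TrajectoryLSTM()
    history = torch.rand(1, 8, 4)   # last 8 detected box states (dummy data)
    next_box = model(history)       # where the tracked drone should appear next
    print(next_box.shape)           # torch.Size([1, 4])

A detector such as YOLOv3 or YOLOv4 (both cited below) would supply the per-frame boxes that feed this history; the LSTM's role is to keep a motion estimate alive across frames where detection momentarily fails.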
The system can track multiple drones simultaneously within its field of view and maintain flight for up to 35 minutes, making it suitable for extended missions that require continuous UAV tracking. It can also lock onto and track other UAVs in mid-air for 4–10 seconds without losing contact, a capability with significant potential for security applications.
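One common way such a lock can be maintained and timed is IoU-based association with a small miss budget. The sketch below is a generic version under assumed parameters (30 fps, a five-frame miss budget), not the authors' method.

    # Generic lock-on bookkeeping (illustrative, not the paper's method): a track
    # stays "locked" while new detections keep matching it by IoU; the lock breaks
    # after a small miss budget. Lock duration follows from the frame rate.
    def iou(a, b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / union if union > 0 else 0.0

    class Lock:
        def __init__(self, box, fps=30, miss_budget=5):   # assumed parameters
            self.box, self.fps = box, fps
            self.frames, self.misses, self.miss_budget = 0, 0, miss_budget

        def update(self, detections, iou_thresh=0.3):
            """Feed one frame's detections; returns False once the lock is lost."""
            best = max(detections, key=lambda d: iou(self.box, d), default=None)
            if best is not None and iou(self.box, best) >= iou_thresh:
                self.box, self.misses = best, 0
            else:
                self.misses += 1
            self.frames += 1
            return self.misses <= self.miss_budget

        def seconds_locked(self):
            return self.frames / self.fps   # e.g. 150 frames at 30 fps = 5 s

In a scheme like this, the reported 4–10 second contact windows would correspond to roughly 120–300 consecutive frames of successful association at 30 fps.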
This research contributes substantially to the development of AI-embedded UAV systems, with implications across domains such as search and rescue operations, border security, and forest fire prevention. The results provide a foundation for future work on similar systems tailored to other applications, ultimately improving the efficiency and safety of UAV operations. The approach to real-time UAV detection and tracking presented here holds promise for driving further innovation in UAV technology and its applications.

Supporting Institution

Sakarya Üniversitesi Bilimsel Araştırmalar Koordinatörlüğü (BAP)

Project Number

078-2022 and 145-2023

References

  • Bochkovskiy, A., Wang, C.-Y., & Liao, H.-Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv:2004.10934.
  • Huang, Z., Xu, W., & Yu, K. (2015). Bidirectional LSTM-CRF models for sequence tagging. arXiv:1508.01991.
  • Kalchbrenner, N., Grefenstette, E., & Blunsom, P. (2014). A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.
  • Lee, H., & Kim, H. (2017). Trajectory tracking control of multirotors from modelling to experiments: A survey. International Journal of Control, Automation and Systems, 15(1), 281–292.
  • Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv:1804.02767.
  • Başaran, E. (2017). Pervane performansının analitik ve sayısal yöntemlerle hesabı [Calculation of propeller performance by analytical and numerical methods]. TOBB Ekonomi ve Teknoloji Üniversitesi.
  • Braun, S. (2018). LSTM benchmarks for deep learning frameworks. arXiv:1806.01818.
  • Hoffmann, G., & Waslander, S. (2009). Quadrotor helicopter trajectory tracking control. American Institute of Aeronautics and Astronautics.
  • Noh, H., Hong, S., & Han, B. (2015). Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
  • Lunder, T. A., & Evjen, P. M. (2017). RF modules range calculation and test (pp. 1–13). Radiocrafts.
  • Jiang, Z., et al. (2020). Real-time object detection method based on improved YOLOv4-tiny. arXiv:2011.04244.
  • Minaeian, S., Liu, J., & Son, Y.-J. (2018). Effective and efficient detection of moving targets from a UAV's camera. IEEE Transactions on Intelligent Transportation Systems, 19(2), 497–506.
  • Opromolla, R., Fasano, G., & Accardo, D. (2018). A vision-based approach to UAV detection and tracking in cooperative applications. Sensors, 18(10).
  • Walter, V., Saska, M., & Franchi, A. (2018). Fast mutual relative localization of UAVs using ultraviolet LED markers. In Proceedings of the International Conference on Unmanned Aircraft Systems (pp. 1217–1226).
  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770–778).
  • Mitchell, R., & Chen, I. (2014). Adaptive intrusion detection of malicious unmanned air vehicles using behavior rule specifications. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(5), 593–604.
  • Tang, Y., et al. (2019). Vision-aided multi-UAV autonomous flocking in GPS-denied environment. IEEE Transactions on Industrial Electronics, 66(1), 616–626.
  • James, A., Jie, W., Xulei, Y., Chenghao, Y., Ngan, N. B., Yuxin, L., Yi, S., Chandrasekhar, V., & Zeng, Z. (2018). TrackNet: A deep learning based fault detection for railway track inspection. In 2018 International Conference on Intelligent Rail Transportation (ICIRT) (pp. 1–5).
  • Singh, A. K., Swarup, A., Agarwal, A., & Singh, D. (2019). Vision-based rail track extraction and monitoring through drone imagery. ICT Express, 5(4), 250–255.
  • Azmat, U., Alotaibi, S. S., Abdelhaq, M., Alsufyani, N., Shorfuzzaman, M., Jalal, A., & Park, J. (2023). Aerial insights: Deep learning-based human action recognition in drone imagery. IEEE Access.
  • Speth, S., Goncalves, A., Rigault, B., Suzuki, S., Bouazizi, M., Matsuo, Y., & Prendinger, H. (2022). Deep learning with RGB and thermal images onboard a drone for monitoring operations. Journal of Field Robotics, 39(6), 840–868.
  • Alam, S. S., Chakma, A., Rahman, M. H., Bin Mofidul, R., Alam, M. M., Utama, I. B. K. Y., & Jang, Y. M. (2023). RF-enabled deep-learning-assisted drone detection and identification: An end-to-end approach. Sensors, 23(9), 4202.
  • Zhang, C., Tian, Y., & Zhang, J. (2022). Complex image background segmentation for cable force estimation of urban bridges with drone-captured video and deep learning. Structural Control and Health Monitoring, 29(4), e2910.
  • Gupta, H., & Verma, O. P. (2022). Monitoring and surveillance of urban road traffic using low altitude drone images: A deep learning approach. Multimedia Tools and Applications, 1–21.

Details

Primary Language: English
Subjects: Deep Learning, Computer Vision
Section: Research Articles
Authors

Ali Furkan Kamanlı (ORCID: 0000-0002-4155-5956)

Project Number: 078-2022 and 145-2023
Publication Date: 31 January 2024
Submission Date: 25 August 2023
Published Issue: Year 2024, Volume: 12, Issue: 1

How to Cite

IEEE: A. F. Kamanlı, “AI-Embedded UAV System for Detecting and Pursuing Unwanted UAVs”, APJESS, vol. 12, no. 1, pp. 1–13, 2024, doi: 10.21541/apjess.1349856.

Academic Platform Journal of Engineering and Smart Systems