Research Article

Pedestrian Detection from Unmanned Aerial Vehicle Images with Deep Learning (Derin Öğrenme ile İnsansız Hava Aracı Görüntülerinden Yaya Tespiti)

Year 2018, Volume: 2 Issue: 2, 64 - 69, 23.12.2018
https://doi.org/10.30518/jav.450913

Abstract

In this study, an application for pedestrian detection was developed using images obtained from unmanned aerial vehicles (UAVs). To this end, features were extracted from the acquired UAV images with the help of a deep learning method. One of the difficulties in processing UAV imagery is the classification of large datasets; in this study, Convolutional Neural Networks (CNNs) were used to overcome it. Another difficulty is that the scarcity of some data types prevents an effective training process, so an image augmentation method was applied to improve the effectiveness of training. With the proposed method, images of pedestrians, cyclists, cars, trees, and street lamps obtained from the UAV were resized to the required input dimensions and fed to the CNN models AlexNet and VGG16 for feature extraction. The extracted features were then classified with a Support Vector Machine (SVM). This classification both separated pedestrians from the other objects and allowed the performance of AlexNet to be compared with that of VGG16. The results showed that the proposed method can be a useful approach for detecting pedestrians.
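
Below is a minimal sketch of the pipeline the abstract describes: object crops are resized to the networks' input size, deep features are extracted with pretrained AlexNet and VGG16, and a linear SVM classifies them. It is illustrative only, not the authors' code: the folder layout (uav_crops/...), the batch size, the use of torchvision's pretrained weights, and scikit-learn's LinearSVC (a LIBLINEAR-based solver, cf. [19]) are all assumptions.

```python
# Sketch, assuming PyTorch/torchvision and scikit-learn.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Both AlexNet and VGG16 in torchvision expect 224x224 RGB inputs.
# The paper also applies image augmentation to offset scarce classes;
# here that would amount to adding e.g. T.RandomHorizontalFlip() to a
# separate training transform.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(model, loader):
    """Run images through the convolutional base and the fully connected
    layers up to (but not including) the final classification layer,
    yielding a 4096-d feature vector per image."""
    model.eval()
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            z = model.features(x)          # convolutional feature maps
            z = model.avgpool(z)
            z = torch.flatten(z, 1)
            z = model.classifier[:-1](z)   # stop before the last layer
            feats.append(z)
            labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Hypothetical folder layout: one subfolder per class
# (pedestrian, cyclist, car, tree, street_lamp).
train_ds = ImageFolder("uav_crops/train", transform=preprocess)
test_ds = ImageFolder("uav_crops/test", transform=preprocess)
train_loader = DataLoader(train_ds, batch_size=32)
test_loader = DataLoader(test_ds, batch_size=32)

# Compare AlexNet and VGG16 features under the same linear SVM.
for name, net in [("AlexNet", models.alexnet(weights="DEFAULT")),
                  ("VGG16", models.vgg16(weights="DEFAULT"))]:
    X_tr, y_tr = extract_features(net, train_loader)
    X_te, y_te = extract_features(net, test_loader)
    svm = LinearSVC()
    svm.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```

Because the CNNs are used only as fixed feature extractors and the SVM does the classification, no fine-tuning of the networks is required in this setup.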

References

  • [1] M. Radovic, O. Adarkwa, and Q. Wang, “Object Recognition in Aerial Images Using Convolutional Neural Networks,” J. Imaging, vol. 3, no. 2, p. 21, 2017.
  • [2] L. Li, Y. Fan, X. Huang, and L. Tian, “Real-time UAV weed scout for selective weed control by adaptive robust control and machine learning algorithm,” Am. Soc. Agric. Biol. Eng. Annu. Int. Meet. (ASABE), 2016.
  • [3] C. Hung, Z. Xu, and S. Sukkarieh, “Feature learning based approach for weed classification using high resolution aerial images from a digital camera mounted on a UAV,” Remote Sens., vol. 6, no. 12, pp. 12037–12054, 2014.
  • [4] P. Zarjam, J. Epps, F. Chen, and N. H. Lovell, “Estimating cognitive workload using wavelet entropy-based features during an arithmetic task,” Comput. Biol. Med., vol. 43, no. 12, pp. 2186–2195, 2013.
  • [5] S. W. Chen, S. S. Shivakumar, S. Dcunha, J. Das, E. Okon, C. Qu, C. J. Taylor, and V. Kumar, “Counting Apples and Oranges With Deep Learning: A Data-Driven Approach,” IEEE Robot. Autom. Lett., vol. 2, no. 2, pp. 781–788, 2017.
  • [6] W. Li, H. Fu, L. Yu, and A. Cracknell, “Deep Learning Based Oil Palm Tree Detection and Counting for High-Resolution Remote Sensing Images,” Remote Sens., vol. 9, no. 1, p. 22, 2016.
  • [7] N. V. Kim and M. A. Chervonenkis, “Situation control of unmanned aerial vehicles for road traffic monitoring,” Mod. Appl. Sci., vol. 9, no. 5, pp. 1–13, 2015.
  • [8] M. B. Bejiga, A. Zeggada, A. Nouffidj, and F. Melgani, “A convolutional neural network approach for assisting avalanche search and rescue operations with UAV imagery,” Remote Sens., vol. 9, no. 2, 2017.
  • [9] Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-Learning-Detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 7, pp. 1409–1422, 2012.
  • [10] F. De Smedt, D. Hulens, and T. Goedemé, “On-board real-time tracking of pedestrians on a UAV,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), 2015, pp. 1–8.
  • [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Adv. Neural Inf. Process. Syst., pp. 1–9, 2012.
  • [12] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [13] A. Kaya, A. S. Keçeli, and A. B. Can, “Akciğer nodül özelliklerinin tahmininde çeşitli sınıflama stratejilerinin incelenmesi,” Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi, 2018, https://doi.org/10.17341/gazimmfd.416530.
  • [14] A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, “Learning Social Etiquette: Human Trajectory Understanding in Crowded Scenes,” in European Conference on Computer Vision (ECCV), 2016.
  • [15] “Stanford Drone Dataset,” 2016. [Online]. Available: http://cvgl.stanford.edu/projects/uav_data/. [Accessed: 20-Jul-2018].
  • [16] Y. Zhou, H. Nejati, T.-T. Do, N.-M. Cheung, and L. Cheah, “Image-based Vehicle Analysis using Deep Neural Network: A Systematic Study,” 2016.
  • [17] A. Khazaee and A. Ebrahimzadeh, “Classification of electrocardiogram signals with support vector machines and genetic algorithms using power spectral features,” Biomed. Signal Process. Control, vol. 5, no. 4, pp. 252–263, 2010.
  • [18] A. Vedaldi and A. Zisserman, “Efficient Additive Kernels via Explicit Feature Maps,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 3, pp. 480–492, 2012.
  • [19] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin, “LIBLINEAR: A Library for Large Linear Classification,” J. Mach. Learn. Res., vol. 9, pp. 1871–1874, 2008.

Details

Primary Language Turkish
Journal Section Research Articles
Authors

Suat Toraman

Publication Date December 23, 2018
Submission Date August 4, 2018
Acceptance Date October 31, 2018
Published in Issue Year 2018

Cite

APA Toraman, S. (2018). Derin Öğrenme ile İnsansız Hava Aracı Görüntülerinden Yaya Tespiti. Journal of Aviation, 2(2), 64-69. https://doi.org/10.30518/jav.450913

Journal of Aviation - JAV 


www.javsci.com - editor@javsci.com


This journal is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.