Research Article

Classification of Vehicle Types Using İMobileNet CNN Approaches and Feature Selection Methods

Year 2021, Volume: 25, Issue: 3, pp. 618-628, 30.12.2021
https://doi.org/10.19113/sdufenbed.889715

Abstract

Nowadays, the density of vehicles in traffic has reached serious levels. As a result, existing transportation networks operate at their maximum capacity, which leads to traffic congestion. Visual Traffic Surveillance Systems, one of the solutions offered by Intelligent Transportation Systems, are an alternative method used to reduce traffic congestion. One of the main tasks of a Visual Traffic Surveillance System is to correctly classify the types of vehicles detected in video or images. This study aims to present new methods that improve the accuracy of Visual Traffic Surveillance Systems in classifying vehicle types. While most studies on improving image classification accuracy rely on traditional methods, this study addresses the problem with today's trending mobile convolutional neural networks (MCNN) through two different approaches. First, the MobileNetv1 and MobileNetv2 models were optimized, and the İMobileNetv1 and İMobileNetv2 approaches were proposed. Second, the proposed MCNN approaches were used solely as feature extractors, and an approach that combines, selects, and classifies the features obtained from them was proposed. As a result of classification with the proposed approaches, a high classification accuracy of 85.05% was achieved.
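
To make the second approach described in the abstract more concrete, the sketch below illustrates a generic pipeline of that kind: two ImageNet-pretrained MobileNet backbones are used purely as feature extractors, their pooled feature vectors are concatenated, a subset of features is selected, and an SVM performs the final classification. This is only an illustrative reconstruction, not the authors' İMobileNetv1/İMobileNetv2 implementation (whose internal modifications are not detailed on this page); the helper names (build_extractor, extract_features, train_pipeline), the 224x224 input size, the mutual-information selector with k=500, and the RBF-kernel SVM settings are all assumptions chosen for the example.

# Illustrative sketch only (assumptions noted above), not the paper's exact method.
import numpy as np
import tensorflow as tf
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

IMG_SIZE = (224, 224)  # assumed input resolution

def build_extractor(backbone_cls, preprocess):
    # Wrap an ImageNet-pretrained backbone as a globally pooled feature extractor.
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=IMG_SIZE + (3,), pooling="avg")
    backbone.trainable = False
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    features = backbone(preprocess(inputs), training=False)
    return tf.keras.Model(inputs, features)

# Stand-ins for the proposed extractors; the paper's modifications are not reproduced here.
extractor_v1 = build_extractor(tf.keras.applications.MobileNet,
                               tf.keras.applications.mobilenet.preprocess_input)
extractor_v2 = build_extractor(tf.keras.applications.MobileNetV2,
                               tf.keras.applications.mobilenet_v2.preprocess_input)

def extract_features(images):
    # Concatenate the pooled feature vectors of both backbones (1024 + 1280 dims).
    f1 = extractor_v1.predict(images, verbose=0)
    f2 = extractor_v2.predict(images, verbose=0)
    return np.concatenate([f1, f2], axis=1)

def train_pipeline(images, labels, n_selected=500):
    # images: (N, 224, 224, 3) float array; labels: (N,) integer class ids.
    X = extract_features(images)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=42)

    # Feature selection by mutual information (one possible selection criterion).
    selector = SelectKBest(mutual_info_classif, k=n_selected).fit(X_tr, y_tr)
    X_tr_sel = selector.transform(X_tr)
    X_te_sel = selector.transform(X_te)

    # RBF-kernel SVM on the selected features; returns test accuracy.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr_sel, y_tr)
    return clf.score(X_te_sel, y_te)

In practice, the images and labels would be loaded from a vehicle-type dataset such as the one cited in [32], and the selection criterion and number of retained features would be tuned on a validation split.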

References

  • [1] “Registrations Or Sales Of New Vehicles - All Types,” 2019, p. 6.
  • [2] M. Won, T. Park, and S. H. Son, “Toward Mitigating Phantom Jam Using Vehicle-to-Vehicle Communication,” IEEE Trans. Intell. Transp. Syst., vol. 18, no. 5, pp. 1313–1324, May 2017, doi: 10.1109/TITS.2016.2605925.
  • [3] Federal Highway Administration, The 2016 Traffic Monitoring Guide, Oct. 2016.
  • [4] M. Won, S. Sahu, and K. J. Park, “DeepWiTraffic: Low cost WiFi-based traffic monitoring system using deep learning,” Proc. - 2019 IEEE 16th Int. Conf. Mob. Ad Hoc Smart Syst. MASS 2019, pp. 476–484, 2019, doi: 10.1109/MASS.2019.00062.
  • [5] H. Lee and B. Coifman, “Using LIDAR to Validate the Performance of Vehicle Classification Stations,” J. Intell. Transp. Syst. Technol. Planning, Oper., vol. 19, no. 4, pp. 355–369, 2015, doi: 10.1080/15472450.2014.941750.
  • [6] M. Won, “Intelligent Traffic Monitoring Systems for Vehicle Classification: A Survey,” IEEE Access, vol. 8, pp. 73340–73358, 2020, doi: 10.1109/ACCESS.2020.2987634.
  • [7] W. Chu, Y. Liu, C. Shen, D. Cai, and X. Hua, “Multi-Task Vehicle Detection With Region-of-Interest Voting,” vol. 27, no. 1, pp. 432–441, 2018.
  • [8] X. Hu et al., “SINet: A scale-insensitive convolutional neural network for fast vehicle detection,” arXiv, vol. 20, no. 3, pp. 1010–1019, 2018, doi: 10.22214/ijraset.2019.6296.
  • [9] H. Tehrani Niknejad, A. Takeuchi, S. Mita, and D. McAllester, “On-road multivehicle tracking using deformable object model and particle filter with improved likelihood estimation,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 2, pp. 748–758, 2012, doi: 10.1109/TITS.2012.2187894.
  • [10] J. Wang, B. Cao, P. Yu, L. Sun, W. Bao, and X. Zhu, “Deep learning towards mobile applications,” Proc. - Int. Conf. Distrib. Comput. Syst., vol. 2018-July, pp. 1385–1393, 2018, doi: 10.1109/ICDCS.2018.00139.
  • [11] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv, 2017.
  • [12] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 4510–4520, 2018, doi: 10.1109/CVPR.2018.00474.
  • [13] A. S. Winoto, M. Kristianus, and C. Premachandra, “Small and Slim Deep Convolutional Neural Network for Mobile Device,” IEEE Access, vol. 8, pp. 125210–125222, 2020, doi: 10.1109/ACCESS.2020.3005161.
  • [14] S. H. Lee, M. Bang, K. H. Jung, and K. Yi, “An efficient selection of HOG feature for SVM classification of vehicle,” Proc. Int. Symp. Consum. Electron. ISCE, vol. 2015-Augus, pp. 14–15, 2015, doi: 10.1109/ISCE.2015.7177766.
  • [15] M. A. Manzoor and Y. Morgan, “Vehicle Make and Model classification system using bag of SIFT features,” 2017 IEEE 7th Annu. Comput. Commun. Work. Conf. CCWC 2017, 2017, doi: 10.1109/CCWC.2017.7868475.
  • [16] M. Cheon, W. Lee, C. Yoon, and M. Park, “Vision-Based Vehicle Detection System With Consideration of the Detecting Location,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 3, pp. 1243–1252, 2012, doi: 10.1109/tits.2012.2188630.
  • [17] Z. Kim, “Realtime obstacle detection and tracking based on constrained Delaunay triangulation,” IEEE Conf. Intell. Transp. Syst. Proceedings, ITSC, pp. 548–553, 2006, doi: 10.1109/itsc.2006.1706798.
  • [18] Y. Zhang, S. J. Kiselewich, and W. A. Bauson, “Legendre and Gabor moments for vehicle recognition in forward collision warning,” IEEE Conf. Intell. Transp. Syst. Proceedings, ITSC, pp. 1185–1190, 2006, doi: 10.1109/itsc.2006.1707383.
  • [19] B. Zhang, “Reliable classification of vehicle types based on cascade classifier ensembles,” IEEE Trans. Intell. Transp. Syst., vol. 14, no. 1, pp. 322–332, 2013, doi: 10.1109/TITS.2012.2213814.
  • [20] A. Psyllos, C. N. Anagnostopoulos, and E. Kayafas, “Vehicle model recognition from frontal view image measurements,” Comput. Stand. Interfaces, vol. 33, no. 2, pp. 142–151, 2011, doi: 10.1016/j.csi.2010.06.005.
  • [21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2323, 1998, doi: 10.1109/5.726791.
  • [22] H. Huttunen, F. S. Yancheshmeh, and C. Ke, “Car type recognition with Deep Neural Networks,” IEEE Intell. Veh. Symp. Proc., vol. 2016-August, no. Iv, pp. 1115–1120, 2016, doi: 10.1109/IVS.2016.7535529.
  • [23] M. Kafai and B. Bhanu, “Dynamic Bayesian networks for vehicle classification in video,” IEEE Trans. Ind. Informatics, vol. 8, no. 1, pp. 100–109, 2012, doi: 10.1109/TII.2011.2173203.
  • [24] B. Zhang, Y. Zhou, and H. Pan, “Vehicle classification with confidence by classified vector quantization,” IEEE Intell. Transp. Syst. Mag., vol. 5, no. 3, pp. 8–20, 2013, doi: 10.1109/MITS.2013.2245725.
  • [25] W. Liu, M. Zhang, Z. Luo, and Y. Cai, “An Ensemble Deep Learning Method for Vehicle Type Classification on Visual Traffic Surveillance Sensors,” IEEE Access, vol. 5, pp. 24417–24425, 2017, doi: 10.1109/ACCESS.2017.2766203.
  • [26] S. L. Rabano, M. K. Cabatuan, E. Sybingco, E. P. Dadios, and E. J. Calilung, “Common garbage classification using mobilenet,” 2018 IEEE 10th Int. Conf. Humanoid, Nanotechnology, Inf. Technol. Commun. Control. Environ. Manag. HNICEM 2018, pp. 18–21, 2018, doi: 10.1109/HNICEM.2018.8666300.
  • [27] C. Bi, J. Wang, Y. Duan, B. Fu, J. R. Kang, and Y. Shi, “MobileNet Based Apple Leaf Diseases Identification,” Mob. Networks Appl., 2020, doi: 10.1007/s11036-020-01640-1.
  • [28] S. Taufiqurrahman, “Diabetic Retinopathy Classification Using A Hybrid and Efficient MobileNetV2-SVM Model,” 2020.
  • [29] M. M. Ahsan, K. D. Gupta, M. M. Islam, S. Sen, M. L. Rahman, and M. S. Hossain, “Study of different deep learning approach with explainable AI for screening patients with covid-19 symptoms: Using CT scan and chest X-ray image dataset,” arXiv, 2020, doi: 10.3390/make2040027.
  • [30] M. S. Boudrioua, “COVID-19 Detection from Chest X-Ray Images Using CNNs Models: Further Evidence from Deep Transfer Learning,” SSRN Electron. J., 2020, doi: 10.2139/ssrn.3630150.
  • [31] Y. Y. Baydilli, “Polen Taşıyan Bal Arılarının MobileNetV2 Mimarisi ile Sınıflandırılması,” Eur. J. Sci. Technol., no. 21, pp. 527–533, 2021, doi: 10.31590/ejosat.836856.
  • [32] Sandeep, “Vehicle Dataset,” 2020. [Online]. Available: https://www.kaggle.com/iamsandeepprasad/vehicle-data-set
  • [33] M. Toğaçar, B. Ergen, and Z. Cömert, “Classification of flower species by using features extracted from the intersection of feature selection methods in convolutional neural network models,” Meas. J. Int. Meas. Confed., vol. 158, 2020, doi: 10.1016/j.measurement.2020.107703.
  • [34] Y. Wang, L. Sun, Y. Zhang, D. Lv, Z. Li, and W. Qi, “An adaptive enhancement based hybrid cnn model for digital dental x-ray positions classification,” arXiv, pp. 1–9, 2020.
  • [35] A. Huo, W. Zhang, and Y. Li, “Traffic Sign Recognition Based on Improved SSD Model,” pp. 54–58, 2020, doi: 10.1109/iccnea50255.2020.00021.
  • [36] R. Patel and A. Chaware, “Transfer learning with fine-tuned MobileNetV2 for diabetic retinopathy,” 2020 Int. Conf. Emerg. Technol. INCET 2020, pp. 7–10, 2020, doi: 10.1109/INCET49848.2020.9154014.
  • [37] B. E. Boser, I. M. Guyon, and V. N. Vapnik, “Training algorithm for optimal margin classifiers,” Proc. Fifth Annu. ACM Work. Comput. Learn. Theory, no. October 2015, pp. 144–152, 1992, doi: 10.1145/130385.130401.
  • [38] G. Anthony, H. Gregg, and M. Tshilidzi, “Image classification using SVMs: One-Against-One Vs One-against-All,” 28th Asian Conf. Remote Sens. 2007, ACRS 2007, vol. 2, pp. 801–806, 2007.
  • [39] Y. I. A. Rejani and S. T. Selvi, “Early Detection of Breast Cancer using SVM Classifier Technique,” vol. 1, no. 3, pp. 127–130, 2009.
  • [40] S. Dhakshina Kumar, S. Esakkirajan, S. Bama, and B. Keerthiveena, “A microcontroller based machine vision approach for tomato grading and sorting using SVM classifier,” Microprocess. Microsyst., vol. 76, p. 103090, 2020, doi: 10.1016/j.micpro.2020.103090.
  • [41] S. Han, Q. Cao, and M. Han, “Parameter selection in SVM with RBF kernel function,” World Autom. Congr. Proc., 2012.
  • [42] V. Bolón-Canedo and B. Remeseiro, “Feature selection in image analysis: a survey,” Artif. Intell. Rev., vol. 53, no. 4, pp. 2905–2931, 2020, doi: 10.1007/s10462-019-09750-3.
  • [43] A. Kraskov, H. Stögbauer, and P. Grassberger, “Estimating mutual information,” Phys. Rev. E - Stat. Physics, Plasmas, Fluids, Relat. Interdiscip. Top., vol. 69, no. 6, p. 16, 2004, doi: 10.1103/PhysRevE.69.066138.
  • [44] T. Zhang, “Solving large scale linear prediction problems using stochastic gradient descent algorithms,” in Twenty-first international conference on Machine learning - ICML ’04, 2004, vol. 6, p. 116, doi: 10.1145/1015330.1015332.
  • [45] K. Crammer, “On the algorithmic implementation of multiclass kernel-based vector machines,” J. Mach. Learn. Res. - JMLR, vol. 2, no. 2, pp. 265–292, 2002.
  • [46] Scikit-learn, “Feature selection using SelectFromModel,” 2021.
  • [47] D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–15, 2015.
  • [48] D. M. W. Powers, “Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation,” no. January 2008, 2020.
  • [49] T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett., vol. 27, no. 8, pp. 861–874, 2006, doi: 10.1016/j.patrec.2005.10.010.

Kaynakça

  • [1] “Registrations Or Sales Of New Vehicles - All Types,” 2019, p. 6.
  • [2] M. Won, T. Park, and S. H. Son, “Toward Mitigating Phantom Jam Using Vehicle-to-Vehicle Communication,” IEEE Trans. Intell. Transp. Syst., vol. 18, no. 5, pp. 1313–1324, May 2017, doi: 10.1109/TITS.2016.2605925.
  • [3] Federal Highway Administration, The 2016 Traffic Monitoring Guide, no. October. .
  • [4] M. Won, S. Sahu, and K. J. Park, “DeepWiTraffic: Low cost WiFi-based traffic monitoring system using deep learning,” Proc. - 2019 IEEE 16th Int. Conf. Mob. Ad Hoc Smart Syst. MASS 2019, pp. 476–484, 2019, doi: 10.1109/MASS.2019.00062.
  • [5] H. Lee and B. Coifman, “Using LIDAR to Validate the Performance of Vehicle Classification Stations,” J. Intell. Transp. Syst. Technol. Planning, Oper., vol. 19, no. 4, pp. 355–369, 2015, doi: 10.1080/15472450.2014.941750.
  • [6] M. Won, “Intelligent Traffic Monitoring Systems for Vehicle Classification: A Survey,” IEEE Access, vol. 8, pp. 73340–73358, 2020, doi: 10.1109/ACCESS.2020.2987634.
  • [7] W. Chu, Y. Liu, C. Shen, D. Cai, and X. Hua, “Multi-Task Vehicle Detection With Region-of-Interest Voting,” vol. 27, no. 1, pp. 432–441, 2018.
  • [8] X. Hu et al., “SINet: A scale-insensitive convolutional neural network for fast vehicle detection,” arXiv, vol. 20, no. 3, pp. 1010–1019, 2018, doi: 10.22214/ijraset.2019.6296.
  • [9] H. Tehrani Niknejad, A. Takeuchi, S. Mita, and D. McAllester, “On-road multivehicle tracking using deformable object model and particle filter with improved likelihood estimation,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 2, pp. 748–758, 2012, doi: 10.1109/TITS.2012.2187894.
  • [10] J. Wang, B. Cao, P. Yu, L. Sun, W. Bao, and X. Zhu, “Deep learning towards mobile applications,” Proc. - Int. Conf. Distrib. Comput. Syst., vol. 2018-July, pp. 1385–1393, 2018, doi: 10.1109/ICDCS.2018.00139.
  • [11] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv, 2017.
  • [12] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 4510–4520, 2018, doi: 10.1109/CVPR.2018.00474.
  • [13] A. S. Winoto, M. Kristianus, and C. Premachandra, “Small and Slim Deep Convolutional Neural Network for Mobile Device,” IEEE Access, vol. 8, pp. 125210–125222, 2020, doi: 10.1109/ACCESS.2020.3005161.
  • [14] S. H. Lee, M. Bang, K. H. Jung, and K. Yi, “An efficient selection of HOG feature for SVM classification of vehicle,” Proc. Int. Symp. Consum. Electron. ISCE, vol. 2015-Augus, pp. 14–15, 2015, doi: 10.1109/ISCE.2015.7177766.
  • [15] M. A. Manzoor and Y. Morgan, “Vehicle Make and Model classification system using bag of SIFT features,” 2017 IEEE 7th Annu. Comput. Commun. Work. Conf. CCWC 2017, 2017, doi: 10.1109/CCWC.2017.7868475.
  • [16] M. Cheon, W. Lee, C. Yoon, and M. Park, “Vision-Based Vehicle Detection System With Consideration of the Detecting Location,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 3, pp. 1243–1252, 2012, doi: 10.1109/tits.2012.2188630.
  • [17] Z. Kim, “Realtime obstacle detection and tracking based on constrained delaunay triangulation,” IEEE Conf. Intell. Transp. Syst. Proceedings, ITSC, pp. 548–553, 2006, doi: 10.1109/itsc.2006.1706798.
  • [18] Y. Zhang, S. J. Kiselewich, and W. A. Bauson, “Legendre and gabor moments for vehicle recognition in forward collision warning,” IEEE Conf. Intell. Transp. Syst. Proceedings, ITSC, pp. 1185–1190, 2006, doi: 10.1109/itsc.2006.1707383.
  • [19] B. Zhang, “Reliable classification of vehicle types based on cascade classifier ensembles,” IEEE Trans. Intell. Transp. Syst., vol. 14, no. 1, pp. 322–332, 2013, doi: 10.1109/TITS.2012.2213814.
  • [20] A. Psyllos, C. N. Anagnostopoulos, and E. Kayafas, “Vehicle model recognition from frontal view image measurements,” Comput. Stand. Interfaces, vol. 33, no. 2, pp. 142–151, 2011, doi: 10.1016/j.csi.2010.06.005.
  • [21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2323, 1998, doi: 10.1109/5.726791.
  • [22] H. Huttunen, F. S. Yancheshmeh, and C. Ke, “Car type recognition with Deep Neural Networks,” IEEE Intell. Veh. Symp. Proc., vol. 2016-August, no. Iv, pp. 1115–1120, 2016, doi: 10.1109/IVS.2016.7535529.
  • [23] M. Kafai and B. Bhanu, “Dynamic bayesian networks for vehicle classification in video,” IEEE Trans. Ind. Informatics, vol. 8, no. 1, pp. 100–109, 2012, doi: 10.1109/TII.2011.2173203.
  • [24] B. Zhang, Y. Zhou, and H. Pan, “Vehicle classification with confidence by classified vector quantization,” IEEE Intell. Transp. Syst. Mag., vol. 5, no. 3, pp. 8–20, 2013, doi: 10.1109/MITS.2013.2245725.
  • [25] W. Liu, M. Zhang, Z. Luo, and Y. Cai, “An Ensemble Deep Learning Method for Vehicle Type Classification on Visual Traffic Surveillance Sensors,” IEEE Access, vol. 5, pp. 24417–24425, 2017, doi: 10.1109/ACCESS.2017.2766203.
  • [26] S. L. Rabano, M. K. Cabatuan, E. Sybingco, E. P. Dadios, and E. J. Calilung, “Common garbage classification using mobilenet,” 2018 IEEE 10th Int. Conf. Humanoid, Nanotechnology, Inf. Technol. Commun. Control. Environ. Manag. HNICEM 2018, pp. 18–21, 2018, doi: 10.1109/HNICEM.2018.8666300.
  • [27] C. Bi, J. Wang, Y. Duan, B. Fu, J. R. Kang, and Y. Shi, “MobileNet Based Apple Leaf Diseases Identification,” Mob. Networks Appl., 2020, doi: 10.1007/s11036-020-01640-1.
  • [28] S. Taufiqurrahman, “Diabetic Retinopathy Classification Using A Hybrid and Efficient MobileNetV2-SVM Model,” 2020.
  • [29] M. M. Ahsan, K. D. Gupta, M. M. Islam, S. Sen, M. L. Rahman, and M. S. Hossain, “Study of different deep learning approach with explainable AI for screening patients with covid-19 symptoms: Using CT scan and chest X-ray image dataset,” arXiv, 2020, doi: 10.3390/make2040027.
  • [30] M. S. Boudrioua, “COVID-19 Detection from Chest X-Ray Images Using CNNs Models: Further Evidence from Deep Transfer Learning,” SSRN Electron. J., 2020, doi: 10.2139/ssrn.3630150.
  • [31] Y. Y. BAYDİLLİ, “Polen Taşıyan Bal Arılarının MobileNetV2 Mimarisi ile Sınıflandırılması,” Eur. J. Sci. Technol., no. 21, pp. 527–533, 2021, doi: 10.31590/ejosat.836856.
  • [32] Sandeep, “Vehicle Dataset.”, 2020, url: https://www.kaggle.com/iamsandeepprasad/vehicle-data-set .
  • [33] M. Toğaçar, B. Ergen, and Z. Cömert, “Classification of flower species by using features extracted from the intersection of feature selection methods in convolutional neural network models,” Meas. J. Int. Meas. Confed., vol. 158, 2020, doi: 10.1016/j.measurement.2020.107703.
  • [34] Y. Wang, L. Sun, Y. Zhang, D. Lv, Z. Li, and W. Qi, “An adaptive enhancement based hybrid cnn model for digital dental x-ray positions classification,” arXiv, pp. 1–9, 2020.
  • [35] A. Huo, W. Zhang, and Y. Li, “Traffic Sign Recognition Based on Improved SSD Model,” pp. 54–58, 2020, doi: 10.1109/iccnea50255.2020.00021.
  • [36] R. Patel and A. Chaware, “Transfer learning with fine-tuned MobileNetV2 for diabetic retinopathy,” 2020 Int. Conf. Emerg. Technol. INCET 2020, pp. 7–10, 2020, doi: 10.1109/INCET49848.2020.9154014.
  • [37] B. E. Boser, I. M. Guyon, and V. N. Vapnik, “Training algorithm for optimal margin classifiers,” Proc. Fifth Annu. ACM Work. Comput. Learn. Theory, no. October 2015, pp. 144–152, 1992, doi: 10.1145/130385.130401.
  • [38] G. Anthony, H. Gregg, and M. Tshilidzi, “Image classification using SVMs: One-Against-One Vs One-against-All,” 28th Asian Conf. Remote Sens. 2007, ACRS 2007, vol. 2, pp. 801–806, 2007.
  • [39] Y. I. A. Rejani and S. T. Selvi, “Early Detection of Breast Cancer using SVM Classifier Technique,” vol. 1, no. 3, pp. 127–130, 2009.
  • [40] S. Dhakshina Kumar, S. Esakkirajan, S. Bama, and B. Keerthiveena, “A microcontroller based machine vision approach for tomato grading and sorting using SVM classifier,” Microprocess. Microsyst., vol. 76, p. 103090, 2020, doi: 10.1016/j.micpro.2020.103090.
  • [41] S. Han, Q. Cao, and M. Han, “Parameter selection in SVM with RBF kernel function,” World Autom. Congr. Proc., 2012.
  • [42] V. Bolón-Canedo and B. Remeseiro, “Feature selection in image analysis: a survey,” Artif. Intell. Rev., vol. 53, no. 4, pp. 2905–2931, 2020, doi: 10.1007/s10462-019-09750-3.
  • [43] A. Kraskov, H. Stögbauer, and P. Grassberger, “Estimating mutual information,” Phys. Rev. E - Stat. Physics, Plasmas, Fluids, Relat. Interdiscip. Top., vol. 69, no. 6, p. 16, 2004, doi: 10.1103/PhysRevE.69.066138.
  • [44] T. Zhang, “Solving large scale linear prediction problems using stochastic gradient descent algorithms,” in Twenty-first international conference on Machine learning - ICML ’04, 2004, vol. 6, p. 116, doi: 10.1145/1015330.1015332.
  • [45] K. Crammer, “On the algorithmic implementation of multiclass kernel-based vector machines,” J. Mach. Learn. Res. - JMLR, vol. 2, no. 2, pp. 265–292, 2002.
  • [46] Sklearn, “Feature selection using Select From Model,” 2021.
  • [47] D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–15, 2015.
  • [48] D. M. W. Powers, “Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation,” no. January 2008, 2020.
  • [49] T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett., vol. 27, no. 8, pp. 861–874, 2006, doi: 10.1016/j.patrec.2005.10.010.
Toplam 49 adet kaynakça vardır.

Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Gürkan Doğan (ORCID: 0000-0003-2497-8348)

Burhan Ergen (ORCID: 0000-0003-3244-2615)

Publication Date: December 30, 2021
Published in Issue: Year 2021, Volume: 25, Issue: 3

How to Cite

APA Doğan, G., & Ergen, B. (2021). İMobileNet CNN Yaklaşımları ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 25(3), 618-628. https://doi.org/10.19113/sdufenbed.889715
AMA Doğan G, Ergen B. İMobileNet CNN Yaklaşımları ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma. SDÜ Fen Bil Enst Der. Aralık 2021;25(3):618-628. doi:10.19113/sdufenbed.889715
Chicago Doğan, Gürkan, ve Burhan Ergen. “İMobileNet CNN Yaklaşımları Ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 25, sy. 3 (Aralık 2021): 618-28. https://doi.org/10.19113/sdufenbed.889715.
EndNote Doğan G, Ergen B (01 Aralık 2021) İMobileNet CNN Yaklaşımları ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 25 3 618–628.
IEEE G. Doğan ve B. Ergen, “İMobileNet CNN Yaklaşımları ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma”, SDÜ Fen Bil Enst Der, c. 25, sy. 3, ss. 618–628, 2021, doi: 10.19113/sdufenbed.889715.
ISNAD Doğan, Gürkan - Ergen, Burhan. “İMobileNet CNN Yaklaşımları Ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 25/3 (Aralık 2021), 618-628. https://doi.org/10.19113/sdufenbed.889715.
JAMA Doğan G, Ergen B. İMobileNet CNN Yaklaşımları ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma. SDÜ Fen Bil Enst Der. 2021;25:618–628.
MLA Doğan, Gürkan ve Burhan Ergen. “İMobileNet CNN Yaklaşımları Ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi, c. 25, sy. 3, 2021, ss. 618-28, doi:10.19113/sdufenbed.889715.
Vancouver Doğan G, Ergen B. İMobileNet CNN Yaklaşımları ve Özellik Seçme Yöntemleri Kullanarak Araç Türlerini Sınıflandırma. SDÜ Fen Bil Enst Der. 2021;25(3):618-628.

e-ISSN: 1308-6529