Research Article

Labeling System with EEG, EMG, and IMU for Visual Training of Autonomous Vehicles

Year 2019, Volume 12, Issue 4, 299-305, 29.10.2019
https://doi.org/10.17671/gazibtd.542662

Abstract

Autonomous vehicles are vehicles that perceive their environment, make decisions based on that perception, and act on those decisions. Today, autonomous vehicles are already used in traffic in some countries. Many sensors are used for environment perception in autonomous vehicles, including various cameras, laser radars (LIDAR), and sonar sensors. Once the environment is perceived, the collected data is taught to the vehicle with the help of machine learning methods, and the vehicle reaches its destination while obeying the traffic rules. Where traffic rules are concerned, the largest share of this task falls to image-based systems. However, ideal traffic and environmental conditions are not always available, so detecting situations that may pose a danger to autonomous vehicles is important. A review of the literature revealed neither a visual dataset in which dangerous situations are labeled nor a related scientific study. This study therefore aims to design a data collection and labeling system to close this gap in the literature. To that end, a system was designed that automatically generates video labels from the driver's physiological data (EEG and EMG) and inertial change data recorded during human driving. First, sensor signals were collected through experiments. Time- and frequency-domain features were then extracted from the collected signals using a non-overlapping sliding window of 0.33 s. The input variables of the resulting dataset were reduced with Principal Component Analysis (PCA) and classified with the Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbors (K-NN) algorithms. According to the findings, K-NN was the most successful of the tested algorithms, distinguishing dangerous from non-dangerous situations with an accuracy of 0.922.
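
The abstract describes a processing chain of non-overlapping 0.33 s windows, time- and frequency-domain feature extraction, PCA reduction, and DT/RF/K-NN classification. The sketch below illustrates that chain with scikit-learn on synthetic stand-in signals; the sampling rate, the concrete feature set and frequency bands, the added standardization step, the PCA variance threshold, and the placeholder labels are assumptions for illustration, not the authors' implementation.

    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier

    FS = 512                    # assumed sampling rate in Hz (not stated in the abstract)
    WIN = int(0.33 * FS)        # 0.33 s non-overlapping window length in samples
    BANDS = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 50)]  # assumed frequency bands

    def window_features(signal, fs=FS, win=WIN):
        """Cut one channel into non-overlapping windows and compute simple
        time-domain (mean, std, RMS, peak-to-peak) and frequency-domain
        (Welch band power) features for each window."""
        feats = []
        for i in range(len(signal) // win):
            seg = signal[i * win:(i + 1) * win]
            f, pxx = welch(seg, fs=fs, nperseg=len(seg))
            band_power = [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS]
            feats.append([seg.mean(), seg.std(),
                          np.sqrt(np.mean(seg ** 2)), np.ptp(seg)] + band_power)
        return np.array(feats)

    # Synthetic stand-ins for synchronized EEG, EMG and IMU channels (no real data here).
    rng = np.random.default_rng(0)
    raw = rng.standard_normal((3, 600 * FS))            # 3 channels, 10 minutes

    X = np.hstack([window_features(ch) for ch in raw])  # one feature row per window
    y = rng.integers(0, 2, size=X.shape[0])             # placeholder dangerous/safe labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

    # PCA reduction followed by the three classifiers compared in the paper;
    # standardization is added here as a common preprocessing step (an assumption).
    for name, clf in [("DT", DecisionTreeClassifier(random_state=42)),
                      ("RF", RandomForestClassifier(random_state=42)),
                      ("K-NN", KNeighborsClassifier(n_neighbors=5))]:
        model = make_pipeline(StandardScaler(), PCA(n_components=0.95), clf)
        print(name, model.fit(X_tr, y_tr).score(X_te, y_te))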

References

  • B. Siciliano, O. Khatib, Springer Handbook of Robotics, Springer, 2016.
  • D. Nistér, O. Naroditsky, J. Bergen, “Visual Odometry for Ground Vehicle Applications”, Journal of Field Robotics, 23(1), 3–20, 2006.
  • M. O. Aqel, M. H. Marhaban, M. I. Saripan, N. B. Ismail, “Review of visual odometry: types, approaches, challenges, and applications”, SpringerPlus, 5(1), 1897, 2016.
  • H. Lum, J. A. Reagan, “Interactive highway safety design model: accident predictive module”, Public Roads, 58(3), 1995.
  • T. Bliss, Implementing the recommendations of the world report on road traffic injury prevention, 2004.
  • M. Peden et al., World report on road traffic injury prevention, World Health Organization Geneva, 2004.
  • N. J. Goodall, “Can you program ethics into a self-driving car?”, IEEE Spectrum, 53(6), 28–58, 2016.
  • D. Birnbacher, W. Birnbacher, “Fully autonomous driving: Where technology and ethics meet”, IEEE Intelligent Systems, 32(5), 3–4, 2017.
  • N. A. Greenblatt, “Self-driving cars and the law”, IEEE Spectrum, 53(2), 46–51, 2016.
  • J. Fleetwood, “Public health, ethics, and autonomous vehicles”, American Journal of Public Health, 107(4), 532–537, 2017.
  • K. Sadeghi, A. Banerjee, J. Sohankar, S. K. Gupta, “Safedrive: An autonomous driver safety application in aware cities”, Pervasive Computing and Communication Workshops (PerCom Workshops), 2016 IEEE International Conference on, 1–6, 2016.
  • C. Richter, N. Roy, “Safe visual navigation via deep learning and novelty detection”, 2017.
  • D. Gruyer, V. Magnier, K. Hamdi, L. Claussmann, O. Orfila, A. Rakotonirainy, “Perception, information processing and modeling: Critical stages for autonomous driving applications”, Annual Reviews in Control, 44, 323–341, 2017.
  • N. Smolyanskiy, A. Kamenev, J. D. Smith, S. T. Birchfield, “Performing autonomous path navigation using deep neural networks”, Oct-2018.
  • L. Chi, Y. Mu, “Deep steering: Learning end-to-end driving model from spatial and temporal visual cues”, arXiv preprint arXiv:1708.03798, 2017.
  • A. Bemporad, A. De Luca, G. Oriolo, “Local incremental planning for a car-like robot navigating among obstacles”, Proceedings of IEEE International Conference on Robotics and Automation, 2, 1205–1211, 1996.
  • J. Liu, P. Jayakumar, J. L. Stein, T. Ersal, “A study on model fidelity for model predictive control-based obstacle avoidance in high-speed autonomous ground vehicles”, Vehicle System Dynamics, 54(11), 1629–1650, 2016.
  • N. Nourani-Vatani, J. Roberts, M. V. Srinivasan, “Practical visual odometry for car-like vehicles”, in 2009 IEEE International Conference on Robotics and Automation, 3551–3557, 2009.
  • F. You, R. Zhang, G. Lie, H. Wang, H. Wen, J. Xu, “Trajectory planning and tracking control for autonomous lane change maneuver based on the cooperative vehicle infrastructure system”, Expert Systems with Applications, 42(14), 5932–5946, 2015.
  • J. Sattar, J. Mo, “SafeDrive: A Robust Lane Tracking System for Autonomous and Assisted Driving Under Limited Visibility”, arXiv preprint arXiv:1701.08449, 2017.
  • M. Cordts et al., “The cityscapes dataset for semantic urban scene understanding”, Proceedings of the IEEE conference on computer vision and pattern recognition, 3213–3223, 2016.
  • W. Maddern, G. Pascoe, C. Linegar, P. Newman, “1 year, 1000 km: The Oxford RobotCar dataset”, The International Journal of Robotics Research, 36(1), 3–15, 2017.
  • F. Yu et al., “BDD100K: A diverse driving video database with scalable annotation tooling”, arXiv preprint arXiv:1805.04687, 2018.
  • A. Geiger, P. Lenz, R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite”, IEEE Conference on Computer Vision and Pattern Recognition, 3354–3361, 2012.
  • A. Geiger, P. Lenz, C. Stiller, R. Urtasun, “Vision meets robotics: The KITTI dataset”, The International Journal of Robotics Research, 32(11), 1231–1237, 2013.
  • E. S. Watkins, “The physiology and pathology of formula one Grand Prix motor racing”, Clin Neurosurg, 53, 145–152, 2006.
  • M. Quigg, M. Quigg, EEG pearls, Mosby Elsevier, 2006.
  • Internet: “EEG Algorithms | NeuroSky”, http://neurosky.com/biosensors/eeg-sensor/algorithms/
  • A. Sezer, Y. İnel, A. Ç. Seçkin, U. Uluçınar, “An Investigation of University Students’ Attention Levels in Real Classroom Settings with NeuroSky’s MindWave Mobile (EEG) Device”, International Educational Technology Conference 2015, 88–101, 2015.
  • A. Sezer, Y. İnel, A. Ç. Seçkin, U. Uluçınar, “The Relationship between Attention Levels and Class Participation of First-Year Students in Classroom Teaching Departments”, International Journal of Instruction, 10(2), 55–68, 2017.
  • F. Bozkurt, A. Ç. Seçkin, A. Coşkun, “Integration of IMU Sensor on Low-Cost EEG and Design of Cursor Control System with ANFIS”, International Journal of Engineering Trends and Technology, 54(3), 162–169, 2017.
  • K. Patel, H. Shah, M. Dcosta, D. Shastri, “Evaluating NeuroSky’s Single-Channel EEG Sensor for Drowsiness Detection”, HCI International 2017 – Posters’ Extended Abstracts, 243–250, 2017.
  • P. D. Girase, M. P. Deshmukh, “Mindwave Device Wheelchair Control”, 2013.
  • B. Champaty, P. Dubey, S. Sahoo, S. S. Ray, K. Pal, A. Anis, “Development of wireless EMG control system for rehabilitation devices”, Annual International Conference on Emerging Research Areas: Magnetics, Machines and Drives (AICERA/iCMMD), 1–4, 2014.
  • M. A. Ahamed, M. A.-U. Ahad, M. H. A. Sohag, M. Ahmad, “Development of low cost wireless biosignal acquisition system for ECG EMG and EOG”, 2nd International Conference on Electrical Information and Communication Technologies (EICT), 195–199, 2015.
  • N. Mulayim, S. Ciklacandir, “Low-Cost Real-Time Electromyography (EMG) Data Acquisition Experimental Setup for Biomedical Technologies Education”, 7, 2017.
  • I. Jolliffe, Principal component analysis, Springer, 2011.
  • J. R. Quinlan, “Induction of decision trees”, Machine learning, 1(1), 81–106, 1986.
  • J. R. Quinlan, “Simplifying decision trees”, International journal of man-machine studies, 27(3), 221–234, 1987.
  • L. Breiman, “Random forests”, Machine learning, 45(1), 5–32, 2001.
  • A. Liaw, M. Wiener, “Classification and regression by randomForest”, R news, 2(3), 18–22, 2002.
  • M. Akman, Y. Genç, H. Ankarali, “Random forests yöntemi ve sağlık alanında bir uygulama”, Turkiye Klinikleri Journal of Biostatistics, 3(1), 36–48, 2011.
  • N. S. Altman, “An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression”, The American Statistician, 46(3), 175–185, 1992.
  • B. Masand, G. Linoff, D. Waltz, “Classifying news stories using memory based reasoning”, 15th annual international ACM SIGIR conference on Research and development in information retrieval, 59–65, 1992.
  • E. Alpaydin, Introduction to machine learning. MIT press, 2009.

Details

Primary Language English
Subjects Computer Software
Section Articles
Authors

Ahmet Çağdaş Seçkin 0000-0002-9849-3338

Publication Date 29 October 2019
Submission Date 21 March 2019
Published in Issue Year 2019, Volume 12, Issue 4

How to Cite

APA Seçkin, A. Ç. (2019). Labeling System with EEG, EMG, and IMU for Visual Training of Autonomous Vehicles. Bilişim Teknolojileri Dergisi, 12(4), 299-305. https://doi.org/10.17671/gazibtd.542662