Enhanced Crop Row Detection Techniques for Autonomous Agricultural Vehicles: A Comparative Analysis of Classical and Deep Learning Approaches
Abstract
As the global population grows, precision agriculture plays a key role in sustainable farming. This study compares three crop row detection methods for autonomous navigation in agriculture: classical image processing, CNN-based plant detection, and CNN-based crop row segmentation. Each method was evaluated in a controlled Gazebo simulation for accuracy, speed, mean angular error, and adaptability. CNN-based crop row segmentation (YOLOv11n-seg) achieved the highest accuracy (99.5%) and was the least affected by environmental changes, but it was also the slowest, averaging below 5 FPS. Classical image processing was the fastest (95.82 FPS on average) but the least reliable, owing to its sensitivity to camera angle and color variation. CNN-based plant detection, particularly YOLOv11n, offered the best balance of high accuracy (98.79%), real-time speed (32.3 FPS on a Jetson Orin Nano), and robustness, outperforming MobileNetV2 (94.62%, 21.49 FPS). Mean angular error was used to quantify navigation stability: CNN-based methods, especially YOLOv11n (±1.40°), were markedly more stable than the classical method (±9.82°), yielding more reliable simulated navigation. Consistent with other recent comparative studies of these methods, the findings highlight a trade-off between real-time performance and accuracy. Field trials are planned to validate these results under real-world conditions.
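The abstract does not give the exact formula used for mean angular error, but since crop rows are undirected lines, one plausible definition compares the detected row heading against the ground-truth heading per frame, wrapping differences modulo 180°, and averages the absolute error. The sketch below illustrates that definition; the function names and example headings are illustrative assumptions, not the paper's implementation.

```python
def angular_error_deg(pred_deg: float, true_deg: float) -> float:
    """Smallest absolute angle between two undirected line headings (degrees).

    Crop-row lines have no direction, so a difference of 178 degrees is
    really a 2-degree error; differences wrap modulo 180.
    """
    diff = abs(pred_deg - true_deg) % 180.0
    return min(diff, 180.0 - diff)


def mean_angular_error(preds, truths):
    """Mean absolute angular error over a sequence of frames."""
    errors = [angular_error_deg(p, t) for p, t in zip(preds, truths)]
    return sum(errors) / len(errors)


# Illustrative headings (degrees) for four frames: detected vs. ground truth.
preds = [88.0, 92.5, 179.0, 1.5]
truths = [90.0, 90.0, 1.0, 179.0]
print(mean_angular_error(preds, truths))  # → 2.25
```

Note that frames 3 and 4 show why the modulo-180 wrap matters: a naive absolute difference would report errors near 178°, while the true misalignment of the row line is only about 2°.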
Keywords
Crop row detection, Autonomous navigation, Deep learning, Mean angular error