Camera-based image detection methods have become a core element of autonomous driving systems for situational awareness, object recognition, and real-time decision-making. This study presents a comprehensive evaluation of camera-based object detection techniques used in autonomous driving systems. Traditional methods such as Haar cascades and HOG are reviewed alongside modern deep learning architectures including CNNs, YOLO, and GANs. The study examines their strengths, weaknesses, and real-time performance across detection tasks such as 2D/3D object detection, semantic/instance segmentation, and behavioral prediction. Sensor fusion techniques that combine data from lidar, radar, and cameras are especially promising for improving perceptual reliability under demanding environmental conditions. Deep learning-based behavioral prediction systems also contribute substantially to safer, more proactive driving by forecasting pedestrian and vehicle movements. The results show that application-specific requirements, including accuracy, computational efficiency, and real-time processing, should guide the choice of object detection technique. The findings suggest that no single technique is sufficient on its own; rather, the fusion of multiple systems, supported by adaptive and resource-efficient architectures, is crucial for safe and reliable autonomous driving. The research highlights the need for modular and scalable perception solutions capable of adapting to real-world complexities. Future studies should concentrate on developing low-cost, adaptive, multi-modal perception systems, which are fundamental to the safe and broad deployment of autonomous driving technology.
| Primary Language | English |
|---|---|
| Subjects | Autonomous Vehicle Systems |
| Journal Section | Articles |
| Authors | |
| Publication Date | June 29, 2025 |
| Submission Date | June 4, 2025 |
| Acceptance Date | June 17, 2025 |
| Published in Issue | Year 2025 Volume: 9 Issue: 2 |