Abstract
Deep learning algorithms are used in many different disciplines for various purposes, thanks to their ever-improving data processing capabilities. Convolutional neural networks (CNNs) are commonly developed and used for this purpose. Meanwhile, the widespread use of Unmanned Aerial Vehicles (UAVs) enables the collection of aerial photographs for photogrammetric studies. In this study, these two fields were brought together with the aim of determining the global-coordinate-system equivalents of objects detected in UAV images using deep learning, and of evaluating the positional accuracy of these values. To this end, the YOLOv3 and YOLOv4 versions of the YOLO algorithm, which predicts the midpoint (center) of each detected object, were trained on the prepared dataset in Google Colab’s virtual machine environment. The coordinate values read from the orthophoto were compared with the coordinates of the object midpoints derived from the predictions of the YOLOv3 and YOLOv4-CSP models, and their spatial accuracy was calculated. An accuracy of 16.8 cm was obtained with YOLOv3 and 15.5 cm with YOLOv4-CSP. In addition, the mAP value was 80% for YOLOv3 and 87% for YOLOv4-CSP, and the F1-score was 80% for YOLOv3 and 85% for YOLOv4-CSP.