<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
    <front>
        <journal-meta>
            <journal-id>turk. j. appl. geoinf. sci.</journal-id>
            <journal-title-group>
                <journal-title>Turkish Journal of Applied Geoinformation Sciences</journal-title>
            </journal-title-group>
            <issn pub-type="epub">3108-818X</issn>
            <publisher>
                <publisher-name>Mersin University</publisher-name>
            </publisher>
        </journal-meta>
        <article-meta>
            <article-id/>
            <article-categories>
                <subj-group xml:lang="en">
                    <subject>Photogrammetry and Remote Sensing</subject>
                </subj-group>
                <subj-group xml:lang="tr">
                    <subject>Fotogrametri ve Uzaktan Algılama</subject>
                </subj-group>
            </article-categories>
            <title-group>
                <article-title>Deep Learning-Based Classification of UAV Orthophotos Using MIDNet Architecture</article-title>
            </title-group>
            
            <contrib-group content-type="authors">
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-4388-6633</contrib-id>
                    <name>
                        <surname>Aslan</surname>
                        <given-names>İlyas</given-names>
                    </name>
                    <aff>Dicle University</aff>
                </contrib>
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-6061-7796</contrib-id>
                    <name>
                        <surname>Polat</surname>
                        <given-names>Nizar</given-names>
                    </name>
                    <aff>Harran University</aff>
                </contrib>
            </contrib-group>
                        
            <pub-date pub-type="pub" iso-8601-date="20260330">
                <day>30</day>
                <month>03</month>
                <year>2026</year>
            </pub-date>
                                        <volume>8</volume>
                                        <issue>1</issue>
                                        <fpage>15</fpage>
                                        <lpage>27</lpage>
                        
                        <history>
                <date date-type="received" iso-8601-date="20260208">
                    <day>08</day>
                    <month>02</month>
                    <year>2026</year>
                </date>
                <date date-type="accepted" iso-8601-date="20260317">
                    <day>17</day>
                    <month>03</month>
                    <year>2026</year>
                </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2026, Turkish Journal of Applied Geoinformation Sciences</copyright-statement>
                    <copyright-year>2026</copyright-year>
                    <copyright-holder>Turkish Journal of Applied Geoinformation Sciences</copyright-holder>
                </permissions>
            
            <abstract><p>Photogrammetric methods have advanced significantly, enabling progress in cartography, construction, agriculture, and natural disaster monitoring. The integration of Structure from Motion (SfM) and orthophoto mapping has facilitated the generation of high-resolution, error-corrected images for various geospatial analyses. However, traditional deep learning-based Convolutional Neural Networks (CNNs) for orthophoto classification face challenges such as high computational costs, limited multiscale feature extraction, and suboptimal accuracy in complex landscapes. To address these limitations, this study introduces the Multiscale Inception Depthwise Network (MIDNet), a novel CNN-based architecture designed for efficient and precise classification of UAV-derived high-resolution orthophotos. MIDNet leverages inception modules for multiscale feature extraction and depthwise separable convolutions to enhance computational efficiency without sacrificing performance. Experimental validation on the generated reference dataset demonstrates that MIDNet outperforms competing deep learning models, achieving an overall accuracy of 96.97%, an average accuracy of 95.96%, and a kappa coefficient of 96.29%, surpassing DenseNet121 (OA: 96.32%, AA: 95.47%, Kappa: 95.50%) and InceptionV3 (OA: 96.60%, AA: 94.05%, Kappa: 95.85%), while maintaining the smallest model size (4.05 MB) and fastest testing time (8 seconds). These results underscore MIDNet's superior classification accuracy, lightweight design, and suitability for resource-constrained environments, making it a compelling advancement in orthophoto classification techniques.</p></abstract>
                                                            
            
            <kwd-group>
                <kwd>Orthophoto</kwd>
                <kwd>unmanned aerial vehicles</kwd>
                <kwd>multiscale classification</kwd>
                <kwd>inception module</kwd>
                <kwd>depthwise separable convolution</kwd>
            </kwd-group>
        </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="journal">Agarap, A. F. (2018). Deep Learning using Rectified Linear Units (ReLU). arXiv preprint. http://arxiv.org/abs/1803.08375</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">Aslan, İ., &amp; Polat, N. (2024). Deep learning-based classification of mature and immature lavender plants using UAV orthophotos and a hybrid CNN approach. Earth Science Informatics, 17(2), 1713–1727. https://doi.org/10.1007/s12145-023-01200-7</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="journal">Buyukdemircioglu, M., Can, R., Kocaman, S., &amp; Kada, M. (2022). Deep Learning Based Building Footprint Extraction From Very High Resolution True Orthophotos and Ndsm. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 5(2), 211–218. https://doi.org/10.5194/isprs-annals-V-2-2022-211-2022</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="journal">Carrio, A., Sampedro, C., Rodriguez-Ramos, A., &amp; Campoy, P. (2017). A review of deep learning methods and applications for unmanned aerial vehicles. Journal of Sensors, 2017. https://doi.org/10.1155/2017/3296874</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="journal">Chen, H., Engkvist, O., Wang, Y., Olivecrona, M., &amp; Blaschke, T. (2018). The rise of deep learning in drug discovery. Drug Discovery Today, 23(6), 1241–1250. https://doi.org/10.1016/j.drudis.2018.01.039</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="journal">Ekmekji, A. (2016). Convolutional Neural Networks for Age and Gender Classification. Stanford University, research paper.</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="journal">Fırat, H., &amp; Hanbay, D. (2023). Comparison of 3D CNN based deep learning architectures using hyperspectral images. Journal of the Faculty of Engineering and Architecture of Gazi University, 38(1), 521–534. https://doi.org/10.17341/gazimmfd.977688</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="journal">Firat, H., Çiğ, H., Güllüoğlu, M. T., Asker, M. E., &amp; Hanbay, D. (2023). Multiscale Feature Fusion for Hyperspectral Image Classification Using Hybrid 3D-2D Depthwise Separable Convolution Networks. Traitement Du Signal, 40(5), 1921–1939. https://doi.org/10.18280/ts.400512</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="journal">He, K., Zhang, X., Ren, S., &amp; Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 770–778. https://doi.org/10.1109/CVPR.2016.90</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="journal">Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., &amp; Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. http://arxiv.org/abs/1704.04861</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="journal">Huang, G., Liu, Z., Van Der Maaten, L., &amp; Weinberger, K. Q. (2017). Densely connected convolutional networks. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2261–2269. https://doi.org/10.1109/CVPR.2017.243</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="journal">Işik, G., &amp; Artuner, H. (2016). Radyo Sinyallerinin Derin Öğrenme Sinir Ağları ile Tanınması [Recognition of radio signals with deep learning neural networks]. 2016 24th Signal Processing and Communication Application Conference, SIU 2016 - Proceedings, 837–840. https://doi.org/10.1109/SIU.2016.7495870</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="journal">Jaud, M., Grasso, F., Le Dantec, N., Verney, R., Delacourt, C., Ammann, J., Deloffre, J., &amp; Grandjean, P. (2016). Potential of UAVs for monitoring mudflat morphodynamics (Application to the Seine Estuary, France). ISPRS International Journal of Geo-Information, 5(4). https://doi.org/10.3390/ijgi5040050</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="journal">Krichen, M. (2023). Convolutional Neural Networks: A Survey. Computers, 12(8), 1–41. https://doi.org/10.3390/computers12080151</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="journal">Krizhevsky, A. (2014). One weird trick for parallelizing convolutional neural networks. http://arxiv.org/abs/1404.5997</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="journal">Manakitsa, N., Maraslidis, G. S., Moysis, L., &amp; Fragulis, G. F. (2024). A Review of Machine Learning and Deep Learning for Object Detection, Semantic Segmentation, and Human Action Recognition in Machine and Robotic Vision. Technologies, 12(2). https://doi.org/10.3390/technologies12020015</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="journal">Mittal, P., Singh, R., &amp; Sharma, A. (2020). Deep learning-based object detection in low-altitude UAV datasets: A survey. Image and Vision Computing, 104, 104046. https://doi.org/10.1016/j.imavis.2020.104046</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="journal">Osco, L. P., Marcato Junior, J., Marques Ramos, A. P., de Castro Jorge, L. A., Fatholahi, S. N., de Andrade Silva, J., Matsubara, E. T., Pistori, H., Gonçalves, W. N., &amp; Li, J. (2021). A review on deep learning in UAV remote sensing. International Journal of Applied Earth Observation and Geoinformation, 102. https://doi.org/10.1016/j.jag.2021.102456</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="journal">Park, J., Cho, Y. K., &amp; Kim, S. (2022). Deep learning-based UAV image segmentation and inpainting for generating vehicle-free orthomosaic. International Journal of Applied Earth Observation and Geoinformation, 115(November), 103111. https://doi.org/10.1016/j.jag.2022.103111</mixed-citation>
                    </ref>
                                    <ref id="ref20">
                        <label>20</label>
                        <mixed-citation publication-type="journal">Qiu, Z., Bai, H., &amp; Chen, T. (2023). Special Vehicle Detection from UAV Perspective via YOLO-GNS Based Deep Learning Network. Drones, 7(2). https://doi.org/10.3390/drones7020117</mixed-citation>
                    </ref>
                                    <ref id="ref21">
                        <label>21</label>
                        <mixed-citation publication-type="journal">Radovic, M., Adarkwa, O., &amp; Wang, Q. (2017). Object recognition in aerial images using convolutional neural networks. Journal of Imaging, 3(2). https://doi.org/10.3390/jimaging3020021</mixed-citation>
                    </ref>
                                    <ref id="ref22">
                        <label>22</label>
                        <mixed-citation publication-type="journal">Rakshit, H., &amp; Bagheri Zadeh, P. (2024). A New Approach to Classify Drones Using a Deep Convolutional Neural Network. Drones, 8(7). https://doi.org/10.3390/drones8070319</mixed-citation>
                    </ref>
                                    <ref id="ref23">
                        <label>23</label>
                        <mixed-citation publication-type="journal">Safonova, A., Tabik, S., Alcaraz-Segura, D., Rubtsov, A., Maglinets, Y., &amp; Herrera, F. (2019). Detection of fir trees (Abies sibirica) damaged by the bark beetle in unmanned aerial vehicle images with deep learning. Remote Sensing, 11(6). https://doi.org/10.3390/rs11060643</mixed-citation>
                    </ref>
                                    <ref id="ref24">
                        <label>24</label>
                        <mixed-citation publication-type="journal">Saxena, A. (2022). An Introduction to Convolutional Neural Networks. International Journal for Research in Applied Science and Engineering Technology, 10(12), 943–947. https://doi.org/10.22214/ijraset.2022.47789</mixed-citation>
                    </ref>
                                    <ref id="ref25">
                        <label>25</label>
                        <mixed-citation publication-type="journal">Simonyan, K., &amp; Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 1–14.</mixed-citation>
                    </ref>
                                    <ref id="ref26">
                        <label>26</label>
                        <mixed-citation publication-type="journal">Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., &amp; Wojna, Z. (2016). Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2818–2826. https://doi.org/10.1109/CVPR.2016.308</mixed-citation>
                    </ref>
                                    <ref id="ref27">
                        <label>27</label>
                        <mixed-citation publication-type="journal">Taye, M. M. (2023). Theoretical Understanding of Convolutional Neural Network: Concepts, Architectures, Applications, Future Directions. Computation, 11(3). https://doi.org/10.3390/computation11030052</mixed-citation>
                    </ref>
                                    <ref id="ref28">
                        <label>28</label>
                        <mixed-citation publication-type="journal">Vargas, R., Mosavi, A., &amp; Ruiz, R. (2017). Deep Learning: A Review. Advances in Intelligent Systems and Computing.</mixed-citation>
                    </ref>
                                    <ref id="ref29">
                        <label>29</label>
                        <mixed-citation publication-type="journal">Yılmaz, H. M., Mutluoğlu, Ö., Ulvi, A., Yaman, A., &amp; Bilgilioğlu, S. S. (2018). İnsansız Hava Aracı ile Ortofoto Üretimi ve Aksaray Üniversitesi Kampüsü Örneği [Orthophoto production with an unmanned aerial vehicle: The case of Aksaray University Campus]. 3(2), 129–136.</mixed-citation>
                    </ref>
                                    <ref id="ref30">
                        <label>30</label>
                        <mixed-citation publication-type="journal">Zhao, X., Wang, L., Zhang, Y., Han, X., Deveci, M., &amp; Parmar, M. (2024). A review of convolutional neural networks in computer vision. In Artificial Intelligence Review (Vol. 57, Issue 4). Springer Netherlands. https://doi.org/10.1007/s10462-024-10721-6</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
