<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
    <front>
        <journal-meta>
            <journal-id>tuje</journal-id>
            <journal-title-group>
                <journal-title>Turkish Journal of Engineering</journal-title>
            </journal-title-group>
            <issn pub-type="epub">2587-1366</issn>
            <publisher>
                <publisher-name>Murat YAKAR</publisher-name>
            </publisher>
        </journal-meta>
        <article-meta>
            <article-id pub-id-type="doi">10.31127/tuje.1529660</article-id>
            <article-categories>
                <subj-group xml:lang="en">
                    <subject>Computer Software</subject>
                </subj-group>
                <subj-group xml:lang="tr">
                    <subject>Bilgisayar Yazılımı</subject>
                </subj-group>
            </article-categories>
            <title-group>
                <article-title>Ship Detection from Optical Satellite Images Using Convolutional Neural Networks</article-title>
            </title-group>
            
            <contrib-group content-type="authors">
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0009-0004-2142-6497</contrib-id>
                    <name>
                        <surname>Toprak</surname>
                        <given-names>Neslihan</given-names>
                    </name>
                    <aff>Piri Reis University</aff>
                </contrib>
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-2313-4525</contrib-id>
                    <name>
                        <surname>Yalman</surname>
                        <given-names>Yıldıray</given-names>
                    </name>
                    <aff>Piri Reis University</aff>
                </contrib>
            </contrib-group>
                        
            <pub-date pub-type="pub" iso-8601-date="20250630">
                <day>30</day>
                <month>06</month>
                <year>2025</year>
            </pub-date>
            <volume>9</volume>
            <issue>2</issue>
            <fpage>342</fpage>
            <lpage>353</lpage>
                        
            <history>
                <date date-type="received" iso-8601-date="20240807">
                    <day>07</day>
                    <month>08</month>
                    <year>2024</year>
                </date>
                <date date-type="accepted" iso-8601-date="20240916">
                    <day>16</day>
                    <month>09</month>
                    <year>2024</year>
                </date>
            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2017, Turkish Journal of Engineering</copyright-statement>
                    <copyright-year>2017</copyright-year>
                    <copyright-holder>Turkish Journal of Engineering</copyright-holder>
                </permissions>
            
            <abstract><p>Because most of the Earth is covered by oceans and seas, they have aroused human curiosity throughout history and have been used in versatile ways. The seas are critical areas for trade, transportation, fishing, tourism, energy resources, border security, defense, and intelligence operations. Today, the increasing use of maritime routes creates problems in maritime security, traffic, and management, and the maritime industry has turned to alternatives such as deep learning techniques to solve them. This paper presents a ship detection method for optical satellite images based on convolutional neural networks. The motivation is to support ship detection systems in identifying possible dangers in areas with heavy maritime traffic, preventing illegal fishing, pirate attacks, and human smuggling, and aiding national defense, security, and the tracking of maritime trade routes. The convolutional neural network models used in the paper are based on YOLOv8 and YOLOv9 and include different variants of these models. The dataset was created from the FGSCR-42 dataset and contains 12 classes. The performance of the models was compared, and the results are presented in this paper. The mAP50 value of our YOLOv8l model, presented as a new approach to ship detection studies in the literature, is 98.9%, which is higher than the mAP values obtained by similar studies.</p></abstract>
                                                            
            
            <kwd-group>
                <kwd>Convolutional Neural Networks</kwd>
                <kwd>Ship Detection</kwd>
                <kwd>Optical Satellite Image</kwd>
                <kwd>YOLOv8</kwd>
                <kwd>YOLOv9</kwd>
            </kwd-group>
                            
                                                                                                                                                    </article-meta>
    </front>
    <back>
                            <ref-list>
            <ref id="ref1">
                <label>1</label>
                <mixed-citation publication-type="website">Marine Traffic. (n.d.). Live map. Retrieved August 7, 2024, from https://help.marinetraffic.com/hc/en-us/articles/204062548-Live-Map</mixed-citation>
            </ref>
            <ref id="ref2">
                <label>2</label>
                <mixed-citation publication-type="report">IMEAK. (2023). Maritime sector report Istanbul 2023. Istanbul &amp; Marmara, Aegean, Mediterranean, Black Sea Regions Chamber of Shipping.</mixed-citation>
            </ref>
            <ref id="ref3">
                <label>3</label>
                <mixed-citation publication-type="book">Kayaalp, K., &amp; Süzen, A. A. (2018). Derin öğrenme ve Türkiye'deki uygulamaları. IKSAD International Publishing House.</mixed-citation>
            </ref>
            <ref id="ref4">
                <label>4</label>
                <mixed-citation publication-type="journal">Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4), 193–202. https://doi.org/10.1007/BF00344251</mixed-citation>
            </ref>
            <ref id="ref5">
                <label>5</label>
                <mixed-citation publication-type="journal">Hubel, D. H., &amp; Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1), 215–243. https://doi.org/10.1113/jphysiol.1968.sp008455</mixed-citation>
            </ref>
            <ref id="ref6">
                <label>6</label>
                <mixed-citation publication-type="journal">LeCun, Y., Bottou, L., Bengio, Y., &amp; Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324. https://doi.org/10.1109/5.726791</mixed-citation>
            </ref>
            <ref id="ref7">
                <label>7</label>
                <mixed-citation publication-type="journal">LeCun, Y., Bengio, Y., &amp; Hinton, G. (2015). Deep learning. Nature, 521, 436–444. https://doi.org/10.1038/nature14539</mixed-citation>
            </ref>
            <ref id="ref8">
                <label>8</label>
                <mixed-citation publication-type="journal">Aydın, V. A. (2024). Comparison of CNN-based methods for yoga pose classification. Turkish Journal of Engineering, 8(1), 65–75. https://doi.org/10.31127/tuje.1275826</mixed-citation>
            </ref>
            <ref id="ref9">
                <label>9</label>
                <mixed-citation publication-type="confproc">Cireşan, D., Meier, U., &amp; Schmidhuber, J. (2012). Multi-column deep neural networks for image classification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3642–3649. https://doi.org/10.48550/arXiv.1202.2745</mixed-citation>
            </ref>
            <ref id="ref10">
                <label>10</label>
                <mixed-citation publication-type="confproc">Cireşan, D., Meier, U., Masci, J., &amp; Gambardella, L. M. (2012). Flexible high-performance convolutional neural networks for image classification. Proceedings of the 22nd International Joint Conference on Artificial Intelligence, 1237–1242. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-210</mixed-citation>
            </ref>
            <ref id="ref11">
                <label>11</label>
                <mixed-citation publication-type="journal">Othman, M. M. (2023). Modeling of daily groundwater level using deep learning neural networks. Turkish Journal of Engineering, 7(4), 331–337. https://doi.org/10.31127/tuje.1169908</mixed-citation>
            </ref>
            <ref id="ref12">
                <label>12</label>
                <mixed-citation publication-type="journal">Meghraoui, K., Sebari, I., Bensiali, S., &amp; Ait El Kadi, K. (2022). On behalf of an intelligent approach based on 3D CNN and multimodal remote sensing data for precise crop yield estimation: Case study of wheat in Morocco. Advanced Engineering Science, 2, 118–126.</mixed-citation>
            </ref>
            <ref id="ref13">
                <label>13</label>
                <mixed-citation publication-type="journal">Çubukçu, E. A., Demir, V., &amp; Sevimli, M. F. (2023). Carbon monoxide forecasting with artificial neural networks for Konya (case study of Meram). Engineering Applications, 2(1), 69–74.</mixed-citation>
            </ref>
            <ref id="ref14">
                <label>14</label>
                <mixed-citation publication-type="journal">Jain, S., Rustagi, A., Saurav, S., Saini, R., &amp; Singh, S. (2021). Three-dimensional CNN-inspired deep learning architecture for yoga pose recognition in the real-world environment. Neural Computing and Applications, 33, 6427–6441. https://doi.org/10.1007/s00521-020-05405-5</mixed-citation>
            </ref>
            <ref id="ref15">
                <label>15</label>
                <mixed-citation publication-type="journal">Singh, A. P., Singh, M., Bhatia, K., &amp; Pathak, H. (2024). Encrypted malware detection methodology without decryption using deep learning-based approaches. Turkish Journal of Engineering, 8(3), 498–509. https://doi.org/10.31127/tuje.1416933</mixed-citation>
            </ref>
            <ref id="ref16">
                <label>16</label>
                <mixed-citation publication-type="preprint">Grefenstette, E., Blunsom, P., Freitas, N. de, &amp; Hermann, K. M. (2014). A deep architecture for semantic parsing. https://doi.org/10.48550/arXiv.1404.7296</mixed-citation>
            </ref>
            <ref id="ref17">
                <label>17</label>
                <mixed-citation publication-type="confproc">Kalchbrenner, N., Grefenstette, E., &amp; Blunsom, P. (2014). A convolutional neural network for modelling sentences. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/P14-1062</mixed-citation>
            </ref>
            <ref id="ref18">
                <label>18</label>
                <mixed-citation publication-type="confproc">Kim, Y. (2014). Convolutional neural networks for sentence classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1746–1751. https://doi.org/10.3115/v1/D14-1181</mixed-citation>
            </ref>
            <ref id="ref19">
                <label>19</label>
                <mixed-citation publication-type="journal">Kayıran, H. F. (2022). The function of artificial intelligence and its sub-branches in the field of health. Engineering Applications, 1(2), 99–107. Retrieved September 14, 2024, from https://publish.mersin.edu.tr/index.php/enap/article/view/328</mixed-citation>
            </ref>
            <ref id="ref20">
                <label>20</label>
                <mixed-citation publication-type="journal">Pajaziti, A., Basholli, F., &amp; Zhaveli, Y. (2023). Identification and classification of fruits through robotic system by using artificial intelligence. Engineering Applications, 2(2), 154–163. Retrieved September 14, 2024, from https://publish.mersin.edu.tr/index.php/enap/article/view/974</mixed-citation>
            </ref>
            <ref id="ref21">
                <label>21</label>
                <mixed-citation publication-type="journal">Ertuğrul, Ö. L., &amp; İnal, F. (2022). Assessment of the artificial fiber contribution on the shear strength parameters of soils. Advanced Engineering Science, 2, 93–100. Retrieved September 14, 2024, from https://publish.mersin.edu.tr/index.php/ades/article/view/172</mixed-citation>
            </ref>
            <ref id="ref22">
                <label>22</label>
                <mixed-citation publication-type="journal">Meghraoui, K., Sebari, I., Bensiali, S., &amp; Ait El Kadi, K. (2022). On behalf of an intelligent approach based on 3D CNN and multimodal remote sensing data for precise crop yield estimation: Case study of wheat in Morocco. Advanced Engineering Science, 2, 118–126. Retrieved September 14, 2024, from https://publish.mersin.edu.tr/index.php/ades/article/view/329</mixed-citation>
            </ref>
            <ref id="ref23">
                <label>23</label>
                <mixed-citation publication-type="journal">Naumov, A., Khmarskiy, P., Byshnev, N., &amp; Piatrouski, M. (2023). Methods and software for estimation of total electron content in ionosphere using GNSS observations. Engineering Applications, 2(3), 243–253. Retrieved September 14, 2024, from https://publish.mersin.edu.tr/index.php/enap/article/view/1165</mixed-citation>
            </ref>
            <ref id="ref24">
                <label>24</label>
                <mixed-citation publication-type="journal">Mirbakhsh, A., Lee, J., Jagirdar, R., Kim, H., &amp; Besenski, D. (2023). Collective assessments of active traffic management strategies in an extensive microsimulation testbed. Engineering Applications, 2(2), 146–153. Retrieved September 14, 2024, from https://publish.mersin.edu.tr/index.php/enap/article/view/929</mixed-citation>
            </ref>
            <ref id="ref25">
                <label>25</label>
                <mixed-citation publication-type="journal">Mema, B., Basholli, F., &amp; Hyka, D. (2024). Learning transformation and virtual interaction through ChatGPT in Albanian higher education. Advanced Engineering Science, 4, 130–140. Retrieved September 14, 2024, from https://publish.mersin.edu.tr/index.php/ades/article/view/1509</mixed-citation>
            </ref>
            <ref id="ref26">
                <label>26</label>
                <mixed-citation publication-type="journal">Yüksek, G., Muratoğlu, Y., &amp; Alkaya, A. (2022). Modelling of supercapacitor by using parameter estimation method for energy storage system. Advanced Engineering Science, 2, 67–73. Retrieved September 14, 2024, from https://publish.mersin.edu.tr/index.php/ades/article/view/98</mixed-citation>
            </ref>
            <ref id="ref27">
                <label>27</label>
                <mixed-citation publication-type="journal">Kaya, Y., Şenol, H. İ., Yiğit, A. Y., &amp; Yakar, M. (2023). Car detection from very high-resolution UAV images using deep learning algorithms. Photogrammetric Engineering &amp; Remote Sensing, 89(2), 117–123. https://doi.org/10.14358/PERS.22-00101R2</mixed-citation>
            </ref>
            <ref id="ref28">
                <label>28</label>
                <mixed-citation publication-type="journal">Akar, Ö., Saralioğlu, E., Güngör, O., &amp; Bayata, H. F. (2024). Semantic segmentation of very-high spatial resolution satellite images: A comparative analysis of 3D-CNN and traditional machine learning algorithms for automatic vineyard detection. International Journal of Engineering and Geosciences, 9(1), 12–24. https://doi.org/10.26833/ijeg.1252298</mixed-citation>
            </ref>
            <ref id="ref29">
                <label>29</label>
                <mixed-citation publication-type="journal">Mahdavifard, M., Ahangar, S. K., Feizizadeh, B., Kamran, K. V., &amp; Karimzadeh, S. (2023). Spatio-temporal monitoring of Qeshm mangrove forests through machine learning classification of SAR and optical images on Google Earth Engine. International Journal of Engineering and Geosciences, 8(3), 239–250. https://doi.org/10.26833/ijeg.1118542</mixed-citation>
            </ref>
            <ref id="ref30">
                <label>30</label>
                <mixed-citation publication-type="journal">Şenol, H. İ., Kaya, Y., Yiğit, A. Y., &amp; Yakar, M. (2024). Extraction and geospatial analysis of the Hersek Lagoon shoreline with Sentinel-2 satellite data. Survey Review, 56(397), 367–382.</mixed-citation>
            </ref>
            <ref id="ref31">
                <label>31</label>
                <mixed-citation publication-type="journal">Demirgül, T., Demir, V., &amp; Sevimli, M. F. (2024). Farklı makine öğrenmesi yaklaşımları ile Türkiye'nin solar radyasyon tahmini. Geomatik, 9(1), 106–122. https://doi.org/10.29128/geomatik.1374383</mixed-citation>
            </ref>
            <ref id="ref32">
                <label>32</label>
                <mixed-citation publication-type="journal">Hazer, A., Bozdağ, A., &amp; Atasever, Ü. H. (2024). Hiper-optimize edilmiş makine öğrenim teknikleri ile taşınmaz değerlemesi, Yozgat kenti örneği. Geomatik, 9(3), 299–312. https://doi.org/10.29128/geomatik.1454915</mixed-citation>
            </ref>
            <ref id="ref33">
                <label>33</label>
                <mixed-citation publication-type="journal">Günen, M. A., &amp; Beşdok, E. (2023). Effect of denoising methods for hyperspectral images classification: DnCNN, NGM, CSF, BM3D and Wiener. Mersin Photogrammetry Journal, 5(1), 1–9. https://doi.org/10.53093/mephoj.1213166</mixed-citation>
            </ref>
            <ref id="ref34">
                <label>34</label>
                <mixed-citation publication-type="journal">Demirel, Y., &amp; Türk, N. (2024). Automatic detection of active fires and burnt areas in forest areas using optical satellite imagery and deep learning methods. Mersin Photogrammetry Journal, 6(2), 66–78. https://doi.org/10.53093/mephoj.1575877</mixed-citation>
            </ref>
            <ref id="ref35">
                <label>35</label>
                <mixed-citation publication-type="journal">Gharechelou, S., Tateishi, R., Sri Sumantyo, J. T., &amp; Johnson, B. A. (2021). Soil moisture retrieval using polarimetric SAR data and experimental observations in an arid environment. ISPRS International Journal of Geo-Information, 10(10), 711. https://doi.org/10.3390/ijgi10100711</mixed-citation>
            </ref>
            <ref id="ref36">
                <label>36</label>
                <mixed-citation publication-type="thesis">Sakshaug, S. E. H. (2013). Evaluation of polarimetric SAR decomposition methods for tropical forest analysis. University of Tromsø.</mixed-citation>
            </ref>
            <ref id="ref37">
                <label>37</label>
                <mixed-citation publication-type="confproc">Wang, B., Han, B., &amp; Yang, L. (2021). Accurate real-time ship target detection using YOLOv4. 2021 6th International Conference on Transportation Information and Safety (ICTIS), 222–227. https://doi.org/10.1109/ICTIS54573.2021.9798495</mixed-citation>
            </ref>
            <ref id="ref38">
                <label>38</label>
                <mixed-citation publication-type="journal">Hong, Z. H., et al. (2021). Multi-scale ship detection from SAR and optical imagery via a more accurate YOLOv3. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 6083–6101. https://doi.org/10.1109/JSTARS.2021.3087555</mixed-citation>
            </ref>
            <ref id="ref39">
                <label>39</label>
                <mixed-citation publication-type="journal">Si, J., Song, B., Wu, J., Lin, W., Huang, W., &amp; Chen, S. (2023). Maritime ship detection method for satellite images based on multiscale feature fusion. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 16, 6642–6655. https://doi.org/10.1109/JSTARS.2023.3296898</mixed-citation>
            </ref>
            <ref id="ref40">
                <label>40</label>
                <mixed-citation publication-type="journal">Di, Y., Jiang, Z., &amp; Zhang, H. (2021). A public dataset for fine-grained ship classification in optical remote sensing images. Remote Sensing, 13(4), 747. https://doi.org/10.3390/rs13040747</mixed-citation>
            </ref>
            <ref id="ref41">
                <label>41</label>
                <mixed-citation publication-type="confproc">Simonyan, K., &amp; Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR). https://doi.org/10.48550/arXiv.1409.1556</mixed-citation>
            </ref>
            <ref id="ref42">
                <label>42</label>
                <mixed-citation publication-type="confproc">He, K., Zhang, X., Ren, S., &amp; Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778.</mixed-citation>
            </ref>
            <ref id="ref43">
                <label>43</label>
                <mixed-citation publication-type="confproc">Xie, S., Girshick, R., Dollár, P., Tu, Z., &amp; He, K. (2017). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5987–5995. https://doi.org/10.1109/CVPR.2017.634</mixed-citation>
            </ref>
            <ref id="ref44">
                <label>44</label>
                <mixed-citation publication-type="confproc">Huang, G., Liu, Z., Van Der Maaten, L., &amp; Weinberger, K. Q. (2017). Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2261–2269. https://doi.org/10.1109/CVPR.2017.243</mixed-citation>
            </ref>
            <ref id="ref45">
                <label>45</label>
                <mixed-citation publication-type="confproc">Lin, T. Y., Chowdhury, A. R., &amp; Maji, S. (2015). Bilinear CNN models for fine-grained visual recognition. Proceedings of the International Conference on Computer Vision (ICCV).</mixed-citation>
            </ref>
            <ref id="ref46">
                <label>46</label>
                <mixed-citation publication-type="confproc">Fu, J., Zheng, H., &amp; Mei, T. (2017). Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4476–4484. https://doi.org/10.1109/CVPR.2017.476</mixed-citation>
            </ref>
            <ref id="ref47">
                <label>47</label>
                <mixed-citation publication-type="confproc">Chen, Y., Bai, Y., Zhang, W., &amp; Mei, T. (2019). Destruction and construction learning for fine-grained image recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5152–5161. https://doi.org/10.1109/CVPR.2019.00530</mixed-citation>
            </ref>
            <ref id="ref48">
                <label>48</label>
                <mixed-citation publication-type="confproc">Redmon, J., Divvala, S., Girshick, R., &amp; Farhadi, A. (2016). You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779–788. https://doi.org/10.1109/CVPR.2016.91</mixed-citation>
            </ref>
            <ref id="ref49">
                <label>49</label>
                <mixed-citation publication-type="website">Ultralytics. (n.d.). Ultralytics [GitHub repository]. Retrieved August 7, 2024, from https://github.com/ultralytics/ultralytics</mixed-citation>
            </ref>
            <ref id="ref50">
                <label>50</label>
                <mixed-citation publication-type="preprint">Wang, C.-Y., Yeh, I.-H., &amp; Liao, H.-Y. M. (2024). YOLOv9: Learning what you want to learn using programmable gradient information. https://doi.org/10.48550/arXiv.2402.13616</mixed-citation>
            </ref>
                            </ref-list>
                    </back>
    </article>
