Research Article

Mobil Uygulama İle Derin Öğrenme Tabanlı Nesne Tespiti ve Büyük Dil Modeli İle İfade Üretme


DEEP LEARNING-BASED OBJECT DETECTION WITH MOBILE APPLICATION AND EXPRESSION GENERATION USING A LARGE LANGUAGE MODEL

Year 2026, Issue: Advanced Online Publication, 69-93
https://doi.org/10.56850/jnse.1828189
https://izlik.org/JA94FL65WP

Abstract

This work presents an integrated mobile solution that allows users to detect objects in their environment, measure the distances to those objects, and understand the spatial relationships between them. The system combines YOLOv11-based real-time object detection, LiDAR-assisted distance measurement, and GPT-4o expression generation, enabling users to locate a desired object and learn about the objects around it. In this way, the user learns not only which objects are present, but also where they are located and how they are arranged relative to one another. In this study, images are captured with the mobile application during object detection, ensuring that the object always lies within the visual frame. This prevents problems such as blurring and incorrect framing, which are frequently encountered in photographs taken by visually impaired users. Experimental results show that the YOLOv11 model performs effectively, with an F1 score of 0.77 and a mAP of 0.806. Furthermore, the fine-tuned GPT-4o model correctly identifies object locations in images and generates expressions describing the target object and the other objects surrounding it. This work thus proposes a system that integrates object detection, LiDAR-based distance measurement, and expression generation by a large language model, and it provides a reference for implementing more advanced solutions in the future.
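As a rough illustration of the pipeline described above, the sketch below combines object labels from a detector with LiDAR-measured distances into a short textual prompt that a language model could turn into a spoken expression. The function name `build_expression_prompt`, the dictionary fields, and the sample detections are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): detector output plus LiDAR
# distances are assembled into a textual description of the target object
# and its surroundings, which could then be passed to a language model.

def build_expression_prompt(detections, target_label):
    """Describe the target object and the nearby objects, closest first."""
    target = next(d for d in detections if d["label"] == target_label)
    others = sorted(
        (d for d in detections if d["label"] != target_label),
        key=lambda d: d["distance_m"],  # LiDAR-measured distance in meters
    )
    nearby = ", ".join(f'{d["label"]} at {d["distance_m"]:.1f} m' for d in others)
    return (f'The {target_label} is {target["distance_m"]:.1f} m away. '
            f'Nearby objects: {nearby}.')

# Hypothetical detector + LiDAR output for one captured frame.
detections = [
    {"label": "chair", "distance_m": 1.2},
    {"label": "table", "distance_m": 0.8},
    {"label": "lamp", "distance_m": 2.5},
]

print(build_expression_prompt(detections, "chair"))
# prints: The chair is 1.2 m away. Nearby objects: table at 0.8 m, lamp at 2.5 m.
```

In a sketch like this, sorting the surrounding objects by distance mirrors the abstract's goal of conveying spatial arrangement, not just object presence.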

References

  • Abed, A. A., Al-Ibadi, A., & Abed, I. A. (2023). Real-time multiple face mask and fever detection using YOLOv3 and TensorFlow lite platforms. Bulletin of Electrical Engineering and Informatics, 12(2), 922-929.
  • Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., & Anadkat, S. (2023). Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
  • Alamsyah, D. P., Ramdhani, Y., Syam, A. T., & Setiadi, A. (2022). Augmented Reality English Education Based iOS with MobileNetV2 Image Recognition Model. 2022 Seventh International Conference on Informatics and Computing (ICIC).
  • Alemdar, K. D., Kayacı Çodur, M., Codur, M. Y., & Uysal, F. (2023). Environmental Effects of Driver Distraction at Traffic Lights: Mobile Phone Use. Sustainability, 15(20), 15056.
  • Boyar, T., & Yıldız, K. (2022). Powdery mildew detection in hazelnut with deep learning. Hittite Journal of Science and Engineering, 9(3), 159-166.
  • Chen, C., Anjum, S., & Gurari, D. (2022). Grounding answers for visual questions asked by visually impaired people. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  • Chen, C., Tseng, Y.-Y., Li, Z., Venkatesh, A., & Gurari, D. (2025). Acknowledging Focus Ambiguity in Visual Questions. arXiv preprint arXiv:2501.02201.
  • Chen, J., & Zhu, Z. (2023). Real-time 3D object detection, recognition and presentation using a mobile device for assistive navigation. SN Computer Science, 4(5), 543.
  • Furniture Computer Vision Dataset. (2022). Retrieved 19.11.2025 from https://universe.roboflow.com/objectdetection-uzld5/furniture-ngpea-h6zxi/
  • Gurari, D., Li, Q., Stangl, A. J., Guo, A., Lin, C., Grauman, K., Luo, J., & Bigham, J. P. (2018). VizWiz grand challenge: Answering visual questions from blind people. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Han, X., Zhang, Z., Ding, N., Gu, Y., Liu, X., Huo, Y., Qiu, J., Yao, Y., Zhang, A., & Zhang, L. (2021). Pre-trained models: Past, present and future. AI Open, 2, 225-250.
  • He, L., Zhou, Y., Liu, L., Zhang, Y., & Ma, J. (2025). Application of the YOLOv11-seg algorithm for AI-based landslide detection and recognition. Scientific Reports, 15(1), 12421.
  • HomeObjects. (2025). Retrieved 19.11.2025 from https://app.roboflow.com/objectdetection-uzld5/homeobjects/4
  • Huh, M., Xu, F., Peng, Y.-H., Chen, C., Gurari, D., Choi, E., & Pavel, A. (2024). Long-form answers to visual questions from blind and low vision people. Workshop on Demographic Diversity in Computer Vision @ CVPR 2025.
  • Khoshsirat, S., & Kambhamettu, C. (2023). Embedding attention blocks for the VizWiz answer grounding challenge. VizWiz Grand Challenge Workshop.
  • Kotthapalli, M., Ravipati, D., & Bhatia, R. (2025). YOLOv1 to YOLOv11: A comprehensive survey of real-time object detection innovations and challenges. arXiv preprint arXiv:2508.02067.
  • Kumar, S., Ratan, R., & Desai, J. (2022). Cotton disease detection using tensorflow machine learning technique. Advances in Multimedia, 2022.
  • Liao, Y., Li, L., Xiao, H., Xu, F., Shan, B., & Yin, H. (2025). YOLO-MECD: citrus detection algorithm based on YOLOv11. Agronomy, 15(3), 687.
  • Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13.
  • Mahi, A. B. S., Eshita, F. S., & Helaly, T. (2023). An automated system for wrong-way vehicle detection using YOLO and DeepSORT. 2023 5th International Conference on Sustainable Technologies for Industry 5.0 (STI).
  • Massiceti, D., Zintgraf, L., Bronskill, J., Theodorou, L., Harris, M. T., Cutrell, E., Morrison, C., Hofmann, K., & Stumpf, S. (2021). Orbit: A real-world few-shot dataset for teachable object recognition. Proceedings of the IEEE/CVF International Conference on Computer Vision.
  • Moreira, F. W. R., Hermes, G., & de Lima, J. M. M. (2024). Development of a Cross Platform Mobile Application Using Gemini to Assist Visually Impaired Individuals. 2024 9th International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS).
  • Morishita, M., Fukuda, H., Yamaguchi, S., Muraoka, K., Nakamura, T., Hayashi, M., Yoshioka, I., Ono, K., & Awano, S. (2024). An exploratory assessment of GPT-4o and GPT-4 performance on the Japanese National Dental Examination. The Saudi Dental Journal, 36(12), 1577-1581.
  • Open Neural Network Exchange. Retrieved 10.12.2025 from https://onnx.ai
  • Prechelt, L. (2002). Early stopping - but when? In Neural Networks: Tricks of the Trade (pp. 55-69). Springer.
  • Pudari, R., Bhutada, S., & Mudavath, S. P. (2020). Real Time Face Recognition Using Convoluted Neural Networks. arXiv preprint arXiv:2010.04517.
  • Sujaini, H., Ramadhan, E. Y., & Novriando, H. (2021). Comparing the performance of linear regression versus deep learning on detecting melanoma skin cancer using Apple Core ML. Bulletin of Electrical Engineering and Informatics, 10(6), 3110-3120.
  • Tautkute, I., Możejko, A., Stokowiec, W., Trzciński, T., Brocki, Ł., & Marasek, K. (2017). What looks good with my sofa: Multimodal search engine for interior design. 2017 Federated Conference on Computer Science and Information Systems (FedCSIS).
  • Tinn, R., Cheng, H., Gu, Y., Usuyama, N., Liu, X., Naumann, T., Gao, J., & Poon, H. (2023). Fine-tuning large neural language models for biomedical natural language processing. Patterns, 4(4).
  • Wang, Z., Li, C., Xu, H., Zhu, X., & Li, H. (2025). Mamba YOLO: A Simple Baseline for Object Detection with State Space Model. Proceedings of the AAAI Conference on Artificial Intelligence.
  • Wehr, A., & Lohr, U. (1999). Airborne laser scanning—an introduction and overview. ISPRS Journal of photogrammetry and remote sensing, 54(2-3), 68-82.

Details

Primary Language English
Subjects Computer Vision, Natural Language Processing
Journal Section Research Article
Authors

Nurcihan Dere 0009-0009-6072-6990

Kazım Yıldız 0000-0001-6999-1410

Önder Demir 0000-0003-4540-663X

Submission Date November 21, 2025
Acceptance Date December 16, 2025
Early Pub Date April 8, 2026
DOI https://doi.org/10.56850/jnse.1828189
IZ https://izlik.org/JA94FL65WP
Published in Issue Year 2026 Issue: Advanced Online Publication

Cite

APA Dere, N., Yıldız, K., & Demir, Ö. (2026). DEEP LEARNING-BASED OBJECT DETECTION WITH MOBILE APPLICATION AND EXPRESSION GENERATION USING A LARGE LANGUAGE MODEL. Journal of Naval Sciences and Engineering, Advanced Online Publication, 69-93. https://doi.org/10.56850/jnse.1828189

Aim & Scope

The journal aims to provide a scientific contribution to the theory and applications of engineering fields and to share knowledge in relevant fields under an open-access policy.

Topics of interest include the technological and scientific aspects of the following areas:

  • Engineering research fields
  • Basic Sciences (to specialize in engineering/technical fields and pursue SCI-E indexing in the future, the journal will reduce the share of Basic Sciences publications for audience focus)


General: Manuscripts must be prepared in MS Word, single-spaced and justified. Font: Times New Roman, 11 points (changed on June 1st, 2023). Margins: left 4.5 cm, right 3.5 cm, top 5 cm, bottom 7 cm, header 3.25 cm, footer 6 cm, gutter 0. Paper type: A4. Page numbers should be centered at the bottom of the page in the format -1-, -2-, -3-, etc. Footnotes are not allowed. (a. Please click to reach the sample text format. b. Please use our checklist before submitting your paper.)

Ethics Committee Approval and/or Legal/Special Permission: Articles must state whether ethics committee approval and/or legal/special permission is required. If such approvals are required, the article should clearly state from which institution, on what date, and with which decision or number they were obtained.

Body of Text: Follow this order when typing manuscripts: Title, Authors, Abstract, Keywords, Title (Turkish), Abstract (Turkish), Keywords (Turkish), Main Text, Appendix (if any), References.

Title: The title should reflect the objectives of the paper clearly, be easily understandable, and not exceed 15 words.

Abstracts: Each paper should have an abstract of 100-200 words, written in the standard structured form of an article (background, purpose, materials, methods used, results, conclusion).

Paper Length: The manuscript should be a minimum of 2000 words or 5 pages, and a maximum of 7000 words or 25 pages including references.

Keywords: Authors must provide between 3 and 5 keywords that will be used to classify the paper.

Unit: International System of Unit (Système Internationale d’Unités; SI) (https://www.britannica.com/science/International-System-of-Units) should be used for all scientific and laboratory data.

References and List of References: References should be given according to APA format. Please click for examples. (Note: This rule is effective as of the November 2020 issue.)


Abbreviations and Acronyms: Standard abbreviations and acronyms should be used for each related discipline. Acronyms should be identified at the first occurrence in the text. Abbreviations and acronyms may also be attached to the main text as an appendix.

Equations and Formulas: Equations and formulas should be numbered consecutively, with the numbers shown in parentheses and aligned to the right. In the text, equations and formulas should be referred to by their numbers in parentheses. Comprehensive formulas that cannot appropriately be set in the text should be prepared as figures.

Figures and Tables: Figures and tables should be numbered consecutively. In the text referring to figures and tables should be made by typing “Figure 1” or “Table 1” etc. A suitable title should be assigned to each of them. If any figures appear in color, please note that they will only appear in color in the online version, but in the printed version they will be in black and white.

Journal of Naval Sciences and Engineering (JNSE) / Ethical Principles and Publication Policy:

Journal of Naval Sciences and Engineering (hereafter JNSE) is a peer-reviewed, international, inter-disciplinary journal in science and technology that has been published semi-annually since 2003. JNSE is committed to providing a platform where the highest standards of publication ethics are the key aspect of the editorial and peer-review processes.

The editorial process for a manuscript submitted to the JNSE consists of a double-blind review, in which the reviewer and author identities are concealed from each other throughout the review process. If the manuscript is accepted in the review stage of the editorial process, the submission goes through the editing stage, which consists of copyediting, language control, reference control, layout, and proofreading. Reviewed articles are treated confidentially in JNSE.

Papers submitted to JNSE are screened for plagiarism regarding the criteria specified on the Publishing Rules page with a plagiarism detection tool. In case the editors become aware of proven scientific misconduct, they can take the necessary steps. The editors have the right to retract an article whether submitted to JNSE or published in JNSE.

Following the completion of the editing stage, the manuscript is then scheduled for publication in an issue of the JNSE. The articles which are submitted to JNSE to be published are free of article submission, processing, and publication charges. The accepted papers are published free of charge online from the journal website and printed. The articles that are accepted to appear in the journal are made freely available to the public via the journal’s website. The journal is also being printed by National Defense University Turkish Naval Academy Press on demand. 

JNSE has editors and an editorial board that consists of academic members from at least five different universities. JNSE has an open access policy which means that all contents are freely available without charge to the user or his/her institution. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles, or use them for any other lawful research purposes.

Publication ethics of the JNSE are mainly based on the guidelines and recommendations which are published by the Committee on Publication Ethics (COPE), World Federation of Engineering Organizations (WFEO), Council of Science Editors (CSE), and Elsevier’s Publishing Ethics for Editors statements.

The duties and responsibilities of all parties in the publishing process including editors, authors, and others are defined below.

The Responsibilities of the Authors:

• Authors are responsible for the scientific, contextual, and linguistic aspects of the articles which are published in the journal. The views expressed or implied in this publication, unless otherwise noted, should not be interpreted as official positions of the Institution.
• Authors should follow the “Author Guidelines” on JNSE’s web page on DergiPark.
• Authors should conduct their research in an ethical and responsible manner and follow all relevant legislation.
• Authors should take collective responsibility for their work and for the content of their publications.
• Authors should check their publications carefully at all stages to ensure that methods and findings are reported accurately.
• Authors must represent the work of others accurately in citations, quotations, and references.
• Authors should carefully check calculations, data presentations, typescripts/submissions, and proofs.
• Authors should present their conclusions and results honestly and without fabrication, falsification, or inappropriate data manipulation. Research images should not be modified in a misleading way.
• Authors should describe their methods to present their findings clearly and unambiguously.
• Authors accept that the publisher of JNSE holds and retains the copyright of the published articles.
• Authors are responsible to obtain permission to include images, figures, etc. to appear in the article.
• In multi-authored publications - unless otherwise stated - authors are ordered according to their contributions.
• Authors should alert the editor promptly if they discover an error in any submitted work.
• Authors should comply with the publication requirement that the submitted work is original and has not been published elsewhere in any language.
• Authors should work with the editor or publisher to correct their work promptly if errors are discovered after publication.
• If the work involves chemicals, procedures, or equipment that have any unusual hazards inherent in their use, the authors must clearly identify these in the manuscript.
• If the work involves the use of animals or human participants, the authors should ensure that all procedures were performed in compliance with relevant laws and institutional guidelines and that the appropriate institutional committee(s) has approved them; the manuscript should contain a statement to this effect.
• Authors should also include an explicit statement in the manuscript that informed consent was obtained for experimentation with human participants and that the participants' rights were observed, because the privacy rights of human participants must always be preserved.
• Authors have the responsibility of responding to the reviewers’ comments promptly and cooperatively, in a point-by-point manner.

The Responsibilities of the Reviewers:

• The peer review process has two fundamental purposes: first, to decide whether the relevant article can be published in JNSE; and second, to help remedy the weaknesses of the article before publication.
• The peer review process for an article submitted to the JNSE is a double-blind review, in which the reviewer and author identities are concealed from each other throughout the review process. Reviewed articles are treated confidentially in JNSE.
• Reviewers must respect the confidentiality of the peer review process.
• Reviewers must refrain from using the knowledge that they have obtained during the peer review process for their own or others’ interests.
• Reviewers should definitely be in contact with the JNSE if they suspect the identity of the author(s) during the review process and if they think that this knowledge may raise potential competition or conflict of interest.
• Reviewers should notify the JNSE in case of any suspicion regarding the potential competition or conflict of interest during the review process.
• Reviewers should accept to review only those studies for which they have the expertise required to conduct an appropriate appraisal, and only if they can comply with the confidentiality of the double-blind review system and keep the details of the peer review process confidential.
• Reviewers should contact the JNSE to request any missing documents after examining the article, supplementary files, and ancillary materials.
• Reviewers should act with the awareness that they are the most basic determinants of the academic quality of the articles to be published in the journal and they should review the article with the responsibility to increase academic quality.
• Reviewers should be in contact with the JNSE editors if they detect any irregularities with respect to the Publication Ethics and Responsibilities.
• Reviewers should review the articles within the time that has been allowed. If they can not review the article within a reasonable time frame, then they should notify the journal as soon as possible.
• Reviewers should report their opinions and suggestions in terms of acceptance/revision/rejection for the manuscript in the peer review process through the Referee Review Form which is provided by JNSE.
• In case of rejection, reviewers should demonstrate the deficient and defective issues about the manuscript in a clear and concrete manner in the provided Referee Review Form.
• Review reports should be prepared and submitted in accordance with the format and content of the Referee Review Form which is provided by JNSE.
• Review reports should be prepared in a fair, objective, original, and prudent manner.
• Review reports should contain constructive criticism and suggestions about the relevant article.

The Responsibilities of the Editors:

• Editors are responsible for enhancing the quality of the journal and supporting the authors in their effort to produce high-quality research. Under no conditions do they allow plagiarism or scientific misconduct.
• Editors ensure that all submissions go through a double-blind peer-review process and other editorial procedures, and that editorial decisions are based on objective judgment.
• Each submission is assessed by the editor for suitability for the JNSE and then sent to at least two expert reviewers.
• Editors are responsible for seeking reviewers who do not have a conflict of interest with the authors. A double-blind review assists the editor in making editorial decisions.
• Editors ensure that all the submitted studies have passed the initial screening, plagiarism check, review, and editing. In case the editors become aware of alleged or proven scientific misconduct, they can take the necessary steps. The editors have the right to retract an article. The editors are willing to publish errata, retractions, or apologies when needed.


Editor-in-Chief

Electronic and Magnetic Properties of Condensed Matter; Superconductivity, Photovoltaic Power Systems, Electronics, Semiconductors, Electronic, Optics and Magnetic Materials, Nanotechnology

Assistant Editor

Aerodynamics (Excl. Hypersonic Aerodynamics), Computational Methods in Fluid Flow, Heat and Mass Transfer (Incl. Computational Fluid Dynamics), Naval Architecture, Ship and Yacht Design, Ocean Engineering

Technical Editor

Curriculum Design Instructional Theories, Instructional Technologies, Development of Science, Technology and Engineering Education and Programs

Layout Editor & Secretariat

Tribology, Materials Engineering, Plating Technology, Corrosion, Material Characterization, Powder Metallurgy
Electrical Engineering (Other)

Editorial Board

Turbulent Flows, Naval Architecture, Ocean Engineering
Fuzzy Computation
Engineering, Maritime Engineering, Marine Main and Auxiliaries, Mechanical Engineering
Information and Computing Sciences, Machine Learning, Deep Learning, Artificial Intelligence
Statistical Experiment Design, Statistical Quality Control, Industrial Engineering, Optimization in Manufacturing
Tribology, Plating Technology, Internal Combustion Engines
Space, Maritime and Aviation Law, Naval Platforms Structural Design, Maritime Business Administration, Marine Technology, Marine Transportation, Maritime Transportation Engineering, Marine Structures, Marine Electronics, Control and Automation, Ship Manoeuvring and Control, Ship and Platform Structures (Incl. Maritime Hydrodynamics), Ship Management, Ship Energy Efficiency, Deck and Navigation Engineering, Ocean Engineering, Maritime Engineering (Other), Marine Geology and Geophysics, Maritime Transportation and Freight Services

Advisory Board

Marine Electronics, Control and Automation, Naval Architecture, Ocean Engineering
Ship Energy Efficiency, Energy, Wind Energy Systems, Renewable Energy Resources, Energy Efficiency
Electronics
Supervised Learning, Machine Learning (Other), Mathematical Optimisation, Industrial Engineering, Optimization in Manufacturing, Supply Chain Management