Research Article

Lightweight Deep Learning Architectures for Ophthalmic Disease Detection: MobileNet Variants Applied to Glaucoma Classification

Year 2025, Volume: 21 Issue: 2, 123 - 135, 21.12.2025

Abstract

Glaucoma is a critical ophthalmological disease affecting millions of people worldwide, leading to irreversible optic nerve damage and permanent vision loss if diagnosed late. The disease often presents with no obvious clinical findings in its early stages, and this, combined with a high reliance on expert judgment, makes current diagnostic processes time-consuming and prone to misclassification. These factors make the integration of artificial intelligence (AI)-based automated systems into clinical decision support processes a crucial requirement for early and accurate glaucoma detection. This study proposes a hybrid approach based on the integrated use of lightweight deep learning (DL) architectures and machine learning (ML)-based classifiers for the automatic classification of glaucoma from fundus images. The proposed hybrid framework employs MobileNetv1, MobileNetv2, and MobileNetv3 (small and large) architectures as its core components. The MobileNet family was chosen for its low parameter count, high computational efficiency, suitability for real-time operation on mobile and embedded systems, and its ability to extract meaningful deep features from fundus images effectively. In this study, high-dimensional deep feature vectors were extracted from fundus images using these pre-trained models. These features were then processed with different ML classifiers, such as Extreme Gradient Boosting (XGBoost), Support Vector Machine (SVM), Light Gradient Boosting Machine (LGBM), Categorical Gradient Boosting (CatBoost), and k-Nearest Neighbor (kNN), creating a comprehensive hybrid classification framework that combines the strengths of both DL and traditional ML methods. Classification metrics such as accuracy, precision, recall, F-score, and area under the curve (AUC) were used for performance evaluation. Experimental findings indicate that the MobileNetv2+SVM hybrid model, in particular, exhibits significant superiority. This hybrid model achieved the highest performance levels in the study, with 0.9409 accuracy, 0.9229 recall, 0.9221 F-score, and 0.9229 AUC. However, the highest precision value (0.9264) was obtained with the MobileNetv3(small)+LGBM hybrid model, demonstrating that different MobileNet variants can provide strong discrimination performance when effectively integrated with various classifiers. The results demonstrate that MobileNet-based deep feature extraction offers high discrimination in glaucoma classification and that the proposed hybrid approach is a reliable, fast, and computationally efficient solution suitable for use in clinical decision support systems. This study provides an important foundation for the development of low-cost and real-time early glaucoma detection systems.
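
As an illustration of the pipeline described in the abstract, the minimal Python sketch below pairs an ImageNet-pretrained MobileNetV2 feature extractor (TensorFlow/Keras) with an SVM classifier (scikit-learn) and computes the metrics reported in the study. It is a hedged sketch only: the dataset directory "fundus_images/", the image size, the train/test split, and the classifier hyperparameters are illustrative assumptions, not the authors' actual settings.

# Minimal sketch (not the authors' exact code) of the hybrid pipeline:
# a frozen, ImageNet-pretrained MobileNetV2 backbone extracts deep features
# from fundus images, and a classical SVM classifies them.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

IMG_SIZE = (224, 224)  # assumed input resolution

# Assumed folder layout: fundus_images/<class_name>/<image>.png
# shuffle=False keeps features and labels aligned below.
dataset = tf.keras.utils.image_dataset_from_directory(
    "fundus_images/", image_size=IMG_SIZE, batch_size=32, shuffle=False)
labels = np.concatenate([y.numpy() for _, y in dataset])

# Global-average-pooled MobileNetV2 yields a 1280-dimensional feature vector
# per image; the backbone is used as-is, without fine-tuning.
backbone = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=IMG_SIZE + (3,))
features = backbone.predict(dataset.map(lambda x, y: preprocess_input(x)))

# Classical ML classifier on top of the deep features (SVM shown here).
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=42)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

# Evaluation with the metrics reported in the study
# (a binary glaucoma vs. normal labelling is assumed).
y_pred = clf.predict(X_te)
y_prob = clf.predict_proba(X_te)[:, 1]
print("Accuracy :", accuracy_score(y_te, y_pred))
print("Precision:", precision_score(y_te, y_pred))
print("Recall   :", recall_score(y_te, y_pred))
print("F1-score :", f1_score(y_te, y_pred))
print("AUC      :", roc_auc_score(y_te, y_prob))

Swapping SVC for xgboost.XGBClassifier, lightgbm.LGBMClassifier, catboost.CatBoostClassifier or sklearn.neighbors.KNeighborsClassifier at the same point would correspond to the other hybrid combinations evaluated in the study, and the other MobileNet variants in tensorflow.keras.applications can be substituted for the backbone in the same way.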

References

  • [1] M. Ashtari-Majlan, M. M. Dehshibi, and D. Masip, “Glaucoma diagnosis in the era of deep learning: A survey,” Expert Systems With Applications, vol. 256, Art. no. 124888, Aug. 2024, doi: 10.1016/j.eswa.2024.124888.
  • [2] S. K. Sharma, D. Muduli, R. Priyadarshini, R. R. Kumar, A. Kumar, and J. Pradhan, “An evolutionary supply chain management service model based on deep learning features for automated glaucoma detection using fundus images,” Engineering Applications of Artificial Intelligence, vol. 128, Art. no. 107449, Nov. 2023, doi: 10.1016/j.engappai.2023.107449.
  • [3] A. Karimi, A. Stanik, C. Kozitza, and A. Chen, “Integrating deep learning with electronic health records for early glaucoma detection: A multi-dimensional machine learning approach,” Bioengineering, vol. 11, no. 6, Art. no. 577, Jun. 2024, doi: 10.3390/bioengineering11060577.
  • [4] X. C. Ling et al., “Deep learning in glaucoma detection and progression prediction: A systematic review and meta-analysis,” Biomedicines, vol. 13, no. 2, Art. no. 420, Feb. 2025, doi: 10.3390/biomedicines13020420.
  • [5] H. Elmannai et al., “An improved deep learning framework for automated optic disc localization and glaucoma detection,” Computer Modeling in Engineering & Sciences, vol. 140, no. 2, pp. 1429–1458, May 2024, doi: 10.32604/cmes.2024.048557.
  • [6] S. Islam et al., “Novel deep learning model for glaucoma detection using fusion of fundus and optical coherence tomography images,” Sensors, vol. 25, no. 14, Art. no. 4337, Jul. 2025, doi: 10.3390/s25144337.
  • [7] N. A. Alkhaldi and R. E. Alabdulathim, “Optimizing glaucoma diagnosis with deep learning-based segmentation and classification of retinal images,” Applied Sciences, vol. 14, no. 17, Art. no. 7795, Sep. 2024, doi: 10.3390/app14177795.
  • [8] V. K. Velpula et al., “Glaucoma detection with explainable AI using convolutional neural networks based feature extraction and machine learning classifiers,” IET Image Processing, vol. 18, pp. 3827–3853, 2024, doi: 10.1049/ipr2.13211.
  • [9] S. M. Saqib et al., “Cataract and glaucoma detection based on transfer learning using MobileNet,” Heliyon, vol. 10, Art. no. e36759, Aug. 2024, doi: 10.1016/j.heliyon.2024.e36759.
  • [10] V. Guntreddi and S. V. Sivakumar, “Deep learning based glaucoma detection using majority voting ensemble of ResNet50, VGG16, and Swin Transformer,” Results in Engineering, vol. 28, Art. no. 107229, Sep. 2025, doi: 10.1016/j.rineng.2025.107229.
  • [11] B. P. Pradeep Kumar, P. K. B. Rangaiah, and R. Augustine, “Enhanced glaucoma detection using U-Net and U-Net+ architectures using deep learning techniques,” Photodiagnosis and Photodynamic Therapy, vol. 54, Art. no. 104621, Jun. 2025, doi: 10.1016/j.pdpdt.2025.104621.
  • [12] S. Yi and L. Zhou, “Multi-step framework for glaucoma diagnosis in retinal fundus images using deep learning,” Medical & Biological Engineering & Computing, vol. 63, pp. 1–13, Aug. 2024, doi: 10.1007/s11517-024-03172-2.
  • [13] T. Shyamalee, D. Meedeniya, G. Lim, and M. Karunarathne, “Automated tool support for glaucoma identification with explainability using fundus images,” IEEE Access, vol. 12, pp. 17290–17310, 2024, doi: 10.1109/ACCESS.2024.3359698.
  • [14] A. Geetha, M. C. Sobia, D. Santhi, and A. Ahilan, “Deep GD: Deep learning based snapshot ensemble CNN with EfficientNet for glaucoma detection,” Biomedical Signal Processing and Control, vol. 100, Art. no. 106989, Oct. 2024, doi: 10.1016/j.bspc.2024.106989.
  • [15] D. Muduli et al., “Cloud-based optimized deep learning framework for automated glaucoma detection using stationary wavelet transform and improved grey-wolf-optimization with ELM approach,” Results in Engineering, vol. 26, Art. no. 104682, Apr. 2025, doi: 10.1016/j.rineng.2025.104682.
  • [16] Y.-Y. Chiang, C.-L. Chen, and Y.-H. Chen, “Deep learning evaluation of glaucoma detection using fundus photographs in highly myopic populations,” Biomedicines, vol. 12, no. 7, Art. no. 1394, Jun. 2024, doi: 10.3390/biomedicines12071394.
  • [17] D. Meedeniya, T. Shyamalee, G. Lim, and P. Yogarajah, “Glaucoma identification with retinal fundus images using deep learning: A systematic review,” Informatics in Medicine Unlocked, vol. 56, Art. no. 101644, May 2025, doi: 10.1016/j.imu.2025.101644.
  • [18] N. Afreen and R. Aluvalu, “Glaucoma detection using explainable AI and deep learning,” EAI Endorsed Transactions on Pervasive Health and Technology, vol. 10, 2024, doi: 10.4108/eetpht.10.5658.
  • [19] A. Aljohani and R. Y. Aburasain, “A hybrid framework for glaucoma detection through federated machine learning and deep learning models,” BMC Medical Informatics and Decision Making, vol. 24, Art. no. 115, 2024, doi: 10.1186/s12911-024-02518-y.
  • [20] Dataset: https://www.kaggle.com/datasets/aryaadithyan/glaucoma. Accessed: Dec. 11, 2025.
  • [21] A. Wibowo, A. A. Nugroho, and E. M. S. Sari, “Lightweight encoder–decoder model for automatic skin lesion segmentation using MobileNetV3-UNet,” Informatics in Medicine Unlocked, vol. 25, Art. no. 100681, 2021, doi: 10.1016/j.imu.2021.100681.
  • [22] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint, Art. no. arXiv:1704.04861, 2017, doi: 10.48550/arXiv.1704.04861.
  • [23] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted residuals and linear bottlenecks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4510–4520, doi: 10.1109/CVPR.2018.00474.
  • [24] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, and Q. V. Le, “Searching for MobileNetV3,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1314–1324, doi: 10.1109/ICCV.2019.00140.


Details

Primary Language English
Subjects Image and Video Coding
Journal Section Research Article
Authors

Cem Baydogan 0000-0002-6125-2442

Submission Date December 13, 2025
Acceptance Date December 21, 2025
Early Pub Date December 21, 2025
Publication Date December 21, 2025
Published in Issue Year 2025 Volume: 21 Issue: 2

Cite

APA Baydogan, C. (2025). Lightweight Deep Learning Architectures for Ophthalmic Disease Detection: MobileNet Variants Applied to Glaucoma Classification. Electronic Letters on Science and Engineering, 21(2), 123-135.

Aim & Scope

The International Journal "Electronic Letters on Science & Engineering" (e-LSE) reports original research in science and engineering at a high level in electronic form. The scope of e-LSE spans the whole range of science and engineering. The e-LSE includes interdisciplinary topics in a variety of application fields. Electronically published since 2005, e-LSE provides rapid publication of topical research into the integration of science and engineering techniques.


Ethics in Publishing

e-LSE pays attention to ethics in publishing at all levels. All articles submitted to the journal should be prepared in line with internationally recognized ethical guidelines. Author(s) can get more information on publishing ethics from the Committee on Publication Ethics (COPE) website (https://publicationethics.org).


All submitted manuscripts are checked for plagiarism using iThenticate plagiarism detection system. Manuscripts with a high similarity rate will not be considered for review and publication.

Templates

Template of the manuscript can be downloaded from the following link:

Manuscript Template (.docx)

Copyright form of the manuscript can be downloaded from the following link:

Copyright Form


Preparation of Manuscript
a) Author(s) should use the specific styles defined for each part of the manuscript (for example, title, abstract, keywords, etc.). Detailed information about these styles can be found in the template file.

b) Author(s) should provide both Turkish and English versions of the title, abstract and keywords.

c) The full name(s) of the author(s) should be given. In addition, the e-mail address(es), affiliation(s), and ORCIDs of all authors should be provided. The telephone number of the corresponding author should also be given.

d) Citations should be given in IEEE style. Authors can get help from citation management applications/tools when preparing their papers. The title of the citations section should be “References”. In-text citations should be written in square brackets, e.g. [1], [3-5].

Ethical Principles

Electronic Letters on Science and Engineering (e-LSE) pays attention to ethics in publishing at all levels. All articles submitted to the journal should be prepared in line with internationally recognized ethical guidelines. Author(s) can get more information on publishing ethics from the Committee on Publication Ethics (COPE) website (https://publicationethics.org). To safeguard the integrity of the published papers, e-LSE editors follow COPE’s flowcharts whenever they suspect an ethical issue with a paper they process.

Authors are encouraged to submit novel, high-quality works that have not been accepted or published by other journals. The journal upholds the best standards of publication ethics and takes all possible precautions against publication malpractice. It is important to agree on standards of appropriate ethical behavior for all parties involved in the act of publishing: authors, editors, reviewers and the publisher. The journal publisher takes its duty of guardianship over all stages of publishing seriously and recognizes its ethical and other obligations.

The authors, reviewers and editors of the journal are expected to be strictly committed to the journal’s publication ethics and malpractice policy and to observe the following statements:

Authors' Responsibilities

e-LSE is a peer-reviewed journal, and authors are obliged to participate in our single-blind peer review process. Unethical publishing behaviors such as plagiarism and self-plagiarism (plagiarism takes many forms, including copying or substantially paraphrasing another work and claiming results from papers published by others) are unacceptable in e-LSE. All forms of plagiarism are intolerable, and such work will be rejected during the evaluation process*. At least the corresponding author has to sign and attach the "e-LSE Copyright Form" during the submission process.
Each submitted work is first pre-reviewed by a member of the "Editorial Assistants - Secretary" team for its formal structure and formatting in line with the "Author Guidelines", including correspondence to the journal sections and topics, etc. When a paper is prepared properly, it is sent to the section editor for single-blind peer review by at least two independent reviewers. Peer review of the work is mandatory and authors must accept this rule.

At least two reviewers' comments and recommendations are sent to the author(s) following completion of the review process by the section editor. During the revision process, the author is required to submit the files containing the tracked changes, the final clean version of the manuscript and responses to the comments of the reviewers. It is the responsibility of the corresponding author to ensure that the work has been approved by all the other authors.

Co-authors should have made a significant contribution to the work. Any other persons who have participated in a project and/or research as collaborators should be listed as contributors in the acknowledgements.

When the author detects a major error or mistake in the submitted work or a previously published paper, he/she is obliged to promptly notify the editor or publisher. At this stage, the author should either withdraw the paper or correct it in cooperation with the publisher/editor. Three different types of correction can be issued in e-LSE: erratum, addendum, and corrigendum notes. If the editor or publisher learns from a third party that the published work contains significant errors, the author must either correct the article promptly or prove its accuracy and provide evidence. If the author does not fulfill this obligation, the article will be withdrawn.

Before submitting any work to the e-LSE the authors must ensure that;

i)   the work is his/her/their own original work and does not infringe copyright or other rights,
ii)  the work, or any version of it with minor revisions, has not previously been published or submitted for publication elsewhere,
iii) the work is not under evaluation for any other publication while it is being evaluated by e-LSE,
iv)  all the data in the work meet a high scientific and technical standard,
v)   potential conflicts of interest are declared at the earliest possible stage (they should be written in the note to the editor at the submission stage),
vi)  if the work concerns clinical or experimental studies on human or animal subjects, an ethics report (Ethics Committee Approval) is attached at submission; this approval must be stated in the work and documented,
vii) in studies requiring ethics committee permission, information about the permit (name of the board, date and number) is included in the method section as well as on the first/last page of the article.

Reviewers' Responsibilities

Following a manuscript submission, one of the members of the e-LSE Editorial Board determines appropriate reviewers according to the subject of the manuscript. The e-LSE reviewer pool, the DergiPark User Pool, or new reviewer invitations are the sources of potential reviewers. Before the review process, works are checked for plagiarism using iThenticate by the Editorial Assistants. Works can be rejected after the plagiarism check. Invited reviewers are free to accept or decline the invitation. If a reviewer accepts the invitation but then realizes that he/she does not have sufficient knowledge of the subject, he/she should inform the Section Editor and cancel the process. The e-LSE review process is single blind and is carried out by at least two independent reviewers who are experts in their fields. Generally, a third reviewer is invited when the decisions of the two reviewers differ considerably. After a reviewer accepts the invitation, he/she is expected to return a response using the "e-LSE Reviewer Recommendation and Comments" form within 21 days. The review decision can be one of the following: Accept, Minor Revision, Major Revision, Reject and Resubmit, or Reject. The review decision is then communicated to the author(s).

All manuscripts are confidential and should not be shared with any other people in any way.

Reviewers should assess the work, write their reports and make their decisions objectively. Opinions about the work should be presented without bias and based on scientific merit. Personal preferences about the work and personal thoughts about the author(s) should not affect the decision process. Reviewers should not make personal criticism of the author(s). Reviewers should express their opinions clearly, with supporting arguments. A reviewer should evaluate works for their scholarly content, regardless of the race, gender, religion, ethnic origin, nationality, or political views of the authors.

Reviewers must not use unpublished material in their own research without written permission from the author(s). Privileged information or ideas obtained during the review process should be kept confidential and not used for personal benefit. If the reviewer has a conflict of interest, connection or association with the author(s), or with the institution or company that conducted the study, he/she should not accept the evaluation and should report this to the Section Editor. Since the process is single blind, reviewers are not allowed to contact the author(s) directly. If the reviewer needs additional information or materials about the work, he/she may request them by notifying the Section Editor.

The decision of the completed review process is communicated to the corresponding author(s).

Editors' Responsibilities

Following the pre-review phase, the manuscript is sent to the Editor-In-Chief for evaluation. The Editor-In-Chief assigns a Section Editor. The Section Editor determines at least two reviewers whose research interests are related to the manuscript and sends the article for evaluation. Generally, a third reviewer is invited when the decisions of the two reviewers differ considerably. After the reviewers submit their evaluations, the Section Editor can decide “Accept”, “Minor Revision”, “Major Revision”, “Reject and Resubmit” or “Reject” according to the comments. After a revision request, the author is required to submit the following files: the manuscript file which shows (highlights) the changes made (usually known as track changes), the final clean version of the manuscript (with the changes accepted), and responses to the comments of the reviewers. After the author(s) respond to the revision request, the Section Editor sends the uploaded files to the reviewers again. The Section Editor has the right to change the previous reviewers or the number of reviewers. A revision can be requested from the author at most 2 times; otherwise the article is rejected. The Section Editor reports the result of the final evaluation to the Editor-In-Chief.

Copyright infringement and plagiarism are two important issues in the editorial process. In case of occurrence or violation of these situations, the Editors inform the Editor-In-Chief. In these processes, Editor-In-Chief manages legal obligations and compliance/non-compliance with “Copyright and Consent Form”.

The selection of editors and reviewers should be unbiased and based on merit. An editor should evaluate works for their scholarly content, regardless of the race, gender, religion, ethnic origin, nationality, or political views of the authors. The editor and any official of the journal should not share any information about a submitted work with anyone other than the authors, reviewers and the publisher during the evaluation process. If the work is rejected, this information is treated as confidential and should not be shared. Information about an unpublished work must not be used in the editor’s own research without the express written consent of the author. Likewise, editors and any other officials are obliged to keep this information confidential and must not use it for personal benefit. The editors should take special care to provide healthy communication throughout the evaluation process. If the editor has a conflict of interest, connection or association with the author(s), or with the institution or company that conducted the work, he/she should not accept the editorial process and should report this to the Editor-In-Chief. In that case, the Editor-In-Chief has to assign a new editor to the work.

An editor who receives evidence of erroneous results, plagiarism, duplication or a major error in a published article is obliged to report the corresponding correction, retraction or similar action, in the form of an erratum, addendum or corrigendum note, to the Editor-In-Chief.

The editor should take reasonable precautions when ethical complaints arise about a submitted or published work. Within the scope of these measures, the author of the work will be asked to respond to the complaints and claims. The editor may also contact institutions other than the authors.

Publication Policy

The e-LSE is a peer-reviewed international scientific journal with an open access policy. The first round of peer review takes 30 days on average, and the second round 20 days on average.

Publication Process

1. The manuscript is sent with the copyright form in its first submission. Journal secretaries check that the manuscript meets the journal style and spelling rules. Then, the similarity rate is checked in the plagiarism program and if necessary, correction is requested by contacting the author.

2. After the pre-review phase, the manuscript is sent to the Editor-In-Chief for evaluation. The Editor-In-Chief assigns a Section Editor. The Section Editor determines at least two reviewers whose research interests are related to the manuscript and sends the article for evaluation. The type of peer review is single-blind. The reviewers submit their evaluations via the "e-LSE Reviewer Recommendation and Comments" form. The Section Editor can decide "Accept", "Revision" or "Reject" according to the comments. During the revision process, the author is required to submit the files containing the tracked changes, the final clean version of the manuscript and responses to the comments of the reviewers. The Section Editor sends the files from the author to the reviewers again. The Section Editor has the right to change reviewers or increase their number. A revision can be requested from the author at most 3 times; otherwise the article is rejected. The Section Editor reports the result of the evaluation to the Editor-In-Chief.

3. The Editor-In-Chief makes the final decision, taking into account the Section Editor's suggestion, the comments of the reviewers and the authors' responses.

4. In the case of acceptance, the article is checked for spelling and language control for the last time. At this stage, if necessary, the author is contacted. After these processes, the article is ready for publication and added to the first issue to be published.

* Evaluation process describes the whole process of the work from submission to final decision or publication.

The publication, reading and downloading of articles is free of charge, no fee is charged for any transaction. Likewise, no fee is charged for the peer-review process.

Founding Editor

Deep Learning, Artificial Intelligence (Other)

Editorial Board

Physical Sciences, Physical Chemistry, Mathematical Sciences, Nanotechnology
Information and Computing Sciences, Digital Processor Architectures, Embedded Systems

Section Editor Board

Information Security Management, Networking and Communications, Concurrent/Parallel Systems and Technologies, Database Systems
Information and Computing Sciences, Machine Learning, Data Mining and Knowledge Discovery, Artificial Intelligence, Natural Language Processing
Mineral Stratum and Geochemistry, Mineralogy- Petrography, Geological Sciences and Engineering (Other)
Applied Mathematics
General Geology, Structural Geology and Tectonics
Image Processing, Machine Learning Algorithms, Artificial Intelligence (Other)
Electrical Engineering, Circuits and Systems, Electronics, Power Electronics, Control Theoryand Applications, Renewable Energy Resources , Mechatronics Engineering
Hasan Şahin is an Assistant Professor at Bursa Technical University, Department of Industrial Engineering. He worked at Bandırma Onyedi Eylül University between 2019 and 2020, and at Kütahya Dumlupınar University between 2002 and 2019. He received his undergraduate and graduate degrees from the Department of Industrial Engineering at Kütahya Dumlupınar University, and his Ph.D. degree from the Department of Industrial Engineering at Sakarya University in 2018. His research interests are supply chain and information technologies.
Multiple Criteria Decision Making, Industrial Engineering, Manufacturing and Service Systems