Policy on the Use of Generative Artificial Intelligence (GAI) for Scientific Journals
1. Purpose and Scope
The purpose of this policy is to ensure that generative artificial intelligence (GAI) technologies are used in an ethical, transparent, responsible, and auditable manner within scientific publishing processes, while safeguarding research integrity, scientific reliability, and academic honesty.
This policy applies to all manuscripts submitted to the journal and to authors, editors, and reviewers involved in the preparation, evaluation, peer-review, and publication stages of these manuscripts. The policy has been prepared in accordance with DergiPark principles, COPE (Committee on Publication Ethics) guidelines, and national and international ethical regulations.
2. Fundamental Principles
The following principles govern the use of generative artificial intelligence:
• Human responsibility and accountability: GAI is a tool; full ethical and scientific responsibility for all generated content lies with humans.
• Transparency: Any use of GAI must be explicitly disclosed.
• Accuracy and verifiability: All information, data, and references generated by GAI must be verified.
• Research integrity: GAI must not be used in ways that lead to fabrication, distortion, or plagiarism.
• Human oversight: Scientific decisions, evaluations, and interpretations must rely on human judgment.
3. Rules for Authors
Generative artificial intelligence tools (large language models, chatbots, visual or graphic generation tools, etc.) cannot be considered authors and must not be included in the list of authors.
Authorship implies human intellectual contribution, academic responsibility, and accountability. The author(s)/corresponding author(s) accept full responsibility for any AI-related ethical, moral, or legal issues that may arise before, during, or after publication. By submitting a manuscript, authors acknowledge and declare that editors, reviewers, editorial boards, and affiliated universities bear no legal responsibility for such issues. This acceptance applies retroactively and prospectively, without time limitation, to all individuals who submit manuscripts to the journal.
3.1. Permitted Uses
Authors may use generative artificial intelligence tools for the following supportive purposes:
• Grammar and spelling checks,
• Text simplification and stylistic improvement,
• Summarization and translation,
• Brainstorming and idea development (provided that the final text belongs to the authors),
• Draft suggestions for figures, graphs, or other visuals (provided that this use is declared in the methodology section).
3.2. Disclosure Requirement
• If generative artificial intelligence has been used during manuscript preparation, this must be clearly stated within the manuscript or in a designated disclosure section specified by the journal.
• The disclosure must include:
  • The name of the AI tool used (e.g., ChatGPT, Gemini, Claude, Midjourney, DeepSeek),
  • The purpose of use (text generation, literature review, data analysis, visual generation, etc.),
  • The scope and nature of the use.
3.3. Author Responsibility
Authors:
• Are fully responsible for the scientific accuracy, originality, and ethical compliance of all content (text, data, code, visuals, etc.) generated or supported by generative artificial intelligence.
• Are obliged to verify GAI-generated information and provide appropriate citations where necessary.
• Must carefully review and control for potential errors, omissions, biases, or fabricated content produced by GAI tools.
The use of generative artificial intelligence does not reduce or transfer the author’s responsibility.
4. Principles for Editors
• Editors verify the existence and appropriateness of GAI usage disclosures.
• During the editorial evaluation process, attention is given to:
  • Bibliographic consistency,
  • Authenticity of references,
  • Compliance with transparency principles.
• Editors may request additional explanations or documentation from authors when deemed necessary.
• If ethical risks related to GAI usage are identified, actions are taken in accordance with COPE guidelines.
5. Rules for Reviewers
• The peer-review process is based on human expertise and scientific judgment.
• Reviewers should not use generative artificial intelligence to:
  • Generate review reports,
  • Analyze manuscript content.
• Reviewers may inform editors about transparency or consistency concerns regarding the use of generative artificial intelligence in manuscripts.
6. Prohibited Uses
The following practices are strictly prohibited:
• Generating fabricated data, information, or references using GAI,
• Citing publications that do not exist,
• Transforming others’ work via GAI without proper citation,
• Presenting GAI outputs as the author’s original contribution,
• Using GAI to conceal ethical violations.
Such actions constitute violations of scientific publication ethics.
7. Detection, Investigation, and Sanctions
• The journal may, when necessary, examine manuscripts using:
  • Plagiarism detection tools,
  • Generative artificial intelligence detection tools.
• In cases of suspected GAI-related ethical violations:
  • The author is asked to provide an explanation,
  • An editorial investigation is initiated,
  • Decisions are made in line with COPE guidelines.
• Depending on the nature of the violation, actions may include:
  • Rejection of the manuscript,
  • Retraction of a published article,
  • Notification of relevant institutions.
All submitted manuscripts are screened for plagiarism using iThenticate.
All published articles are assigned a DOI.
JOMELIPS is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license: https://creativecommons.org/licenses/by-nc-sa/4.0/