Authors’ Core Principles and Responsibilities
Authorship and Accountability:
Generative artificial intelligence (AI) systems must never be listed as authors or co-authors of scholarly manuscripts. Authors retain full responsibility for the originality, accuracy, and integrity of their submissions. The involvement of AI tools does not diminish the scientific, ethical, or intellectual obligations of the authors. Any attempt to fabricate authorship or falsify identity using AI is strictly forbidden.
Transparency and Disclosure:
Whenever AI tools are employed during research, writing, or related stages, their use must be explicitly declared in the manuscript. This declaration should appear in the “Methods” or “Acknowledgements” section, as appropriate, and must name each tool, its version number, and the specific purpose for which it was used.
Editorial Policies on AI Use
Confidentiality and Intellectual Property:
Editors are prohibited from uploading unpublished manuscripts or supplementary materials to AI platforms. Protecting the confidentiality of submitted work and safeguarding authors’ intellectual property are fundamental editorial duties.
Editorial Workflow Use:
Editors may use AI tools for limited tasks (e.g., initial eligibility checks, reviewer identification) only with explicit authorization from journal management. Any such use must always be communicated to the authors.
Managing Misuse or Concerns:
In cases of uncertainty regarding AI use, editors should request clarification from authors and, if needed, escalate the matter to journal management.
Evaluation of Author Declarations:
Editors must critically assess author statements on AI use and request additional details where necessary. They are responsible for ensuring compliance with journal policy.
Staying Updated:
Editors are expected to stay informed about advances in generative AI technologies and remain aligned with evolving journal policies.
Reviewer Policies on AI Use
Detection of AI Use:
Reviewers are encouraged to flag possible undeclared AI use in manuscripts and to notify editors when such cases arise. However, these assessments must rest on objective criteria.
Confidentiality and Ethics:
Reviewers must not upload manuscripts or related peer review materials to AI platforms. Reviews should be based solely on the reviewer’s expertise and judgment.
Ethical Evaluation:
Reviewers should approach AI-related issues impartially and in alignment with journal policy. Any feedback must be constructive and grounded in that policy.
Permissible Areas of AI Use
Conceptual and Explanatory Visuals: AI may assist in visualizing theoretical concepts or frameworks.
Data Visualization: AI tools may be used to improve clarity and design in charts, graphs, and tables.
Illustrative Visuals: AI-generated visuals may simplify complex ideas to support reader understanding.
Restricted and Prohibited Areas
Content Generation: AI must not produce substantive manuscript sections such as abstracts, introductions, literature reviews, or discussions.
Research Findings: AI must not generate, interpret, or report research data or results.
References and Citations: The creation of fabricated or unverifiable references via AI is prohibited.
Scholarly Argumentation: Core arguments, theoretical insights, and contributions must be developed solely by the authors.
Policy Violation Procedures
Failure to disclose AI use, or any other violation of these guidelines, may result in rejection of the manuscript during review. If a violation is discovered after publication, corrective measures—including formal corrections or retraction of the article—may be implemented. Repeated or serious breaches may result in the refusal of future submissions.