Core Principles and Responsibilities for Authors on Generative AI Use
Authorship and Responsibility:
Under no circumstances should generative artificial intelligence (AI) tools be listed as an author or co-author of an academic article. The article's author(s) bear full responsibility for the content, accuracy, and originality of the submitted work. The use of AI tools does not absolve authors of their scientific, ethical, and intellectual responsibilities. The Journal explicitly prohibits the use of AI to generate fictitious authors or to misrepresent identity.
Transparency and Declaration:
The use of any AI tool during the research, writing, or editing processes must be clearly and transparently declared within the article. This declaration should be included in the "Methods" or "Acknowledgements" section, as appropriate. The disclosure must detail the specific AI tools used, including their full names and version numbers, along with a description of how and for what purpose they were employed.
Generative AI Usage Policies for Editors
Confidentiality and Intellectual Property Responsibility:
Editors must not upload unpublished manuscripts, associated files, images, or any related information to AI tools. Protecting the confidentiality of submitted content and safeguarding authors' intellectual property rights are fundamental responsibilities of the editors.
Use of AI in the Evaluation Process:
Editors may use AI tools for specific stages of the editorial workflow, such as initial suitability screening or reviewer selection, only with the explicit approval of the journal management. Any such use of AI must be communicated transparently to the authors.
Management of Suspected Misuse:
In cases of uncertainty or concern regarding AI use, editors should engage in open and transparent communication with the authors, requesting supporting evidence when necessary. Situations requiring further investigation should be referred to the journal management for formal assessment.
Evaluation of Authors' AI Use Declarations:
Editors are expected to carefully review authors' declarations concerning the use of AI tools and request clarification or additional information when needed. It is the editor's responsibility to assess whether the declared AI use complies with the journal's established policies.
Monitoring Policy Developments:
Editors should keep abreast of developments in generative AI technologies and stay informed about the journal's updated policies in this area.
Generative AI Usage Policies for Reviewers
Detection of AI Use:
Reviewers are encouraged to be alert to potential undeclared use of AI in the manuscripts they evaluate and to inform the editors if they suspect such use. However, such assessments must be based on clear and objective criteria.
Confidentiality and Ethical Responsibility:
Under no circumstances should reviewers upload unpublished manuscripts or any related documents sent for peer review to generative AI platforms. Such an action could breach confidentiality and infringe upon intellectual property rights. The review process must be conducted based on the reviewer's own expertise and knowledge.
Review Ethics:
Reviewers should evaluate authors' use of AI impartially; personal opinions or biases must not override the journal's established policies. Any feedback or criticism regarding AI use should be constructive and aligned with the journal's official guidelines.
Permitted Uses of AI
Conceptual Diagrams and Explanatory Visuals:
Generative AI may be used to create visual representations of theoretical ideas, conceptual frameworks, or processes. Visuals created in this manner must accurately reflect the author's own understanding and explanations.
Data Visualization:
Authors may use AI tools to enhance the visual presentation of research data. These tools can be particularly useful in improving the design and clarity of graphs, tables, and diagrams.
Graphical Depictions and Representative Visuals:
AI-generated visuals can be used as symbolic or explanatory representations to simplify and clarify complex ideas. Such visuals should aid the reader's understanding and must not distort or misrepresent the concepts described.
Restricted or Prohibited Uses of AI
Content Generation:
It is not acceptable for significant portions of a manuscript—such as the abstract, introduction, literature review, or discussion section—to be generated by AI. AI-generated content should only be treated as a draft or suggestion and must be carefully reviewed, rewritten, and refined by the author(s) to ensure academic rigor and originality.
Generation and Interpretation of Research Findings:
AI tools must not be used to generate, report, or interpret research results. The author(s) bear sole responsibility for the accuracy, scope, and validity of the data analysis.
Reference Generation and Citation:
Using AI tools to generate fabricated, unverifiable, or non-existent sources is strictly prohibited. All cited sources must be verifiable, accurately referenced according to academic standards, and approved by the author(s).
Academic Writing and Argumentation:
The development of the manuscript's core arguments, theoretical contributions, and main thesis is solely the responsibility of the author(s). AI may only be used as an auxiliary tool in the writing process; it cannot replace the author's critical thinking or original academic contribution.
Procedures for Policy Violations:
If the use of AI tools is not clearly declared, or if AI tools are used in violation of these guidelines, the manuscript may be rejected during the evaluation process. If a violation is identified after publication, corrective actions, such as the publication of a correction or the retraction of the article, may be taken. Repeated or serious violations of this policy may result in the rejection of the author(s)' future submissions to the journal.
1. Rules for Authors
1.1. Content generated using generative artificial intelligence (e.g., ChatGPT, Claude, Gemini) or other AI-based tools is not accepted for direct inclusion in the main text of the work. Legal analysis, interpretation, and inferences must be formulated based on the author(s)' own academic competence.
1.2. If generative AI tools are utilized in any manner, the name of the tool(s) used, the purpose of their use, and the extent of their contribution must be explicitly stated within the work.
1.3. Undeclared use of generative AI tools will result in the immediate rejection of the manuscript and will be treated as an ethical violation. Such cases constitute a breach of academic publication ethics and may be reported to the relevant academic ethics boards.
1.4. Generative AI tools cannot be listed as authors. Since AI systems cannot hold intellectual property or assume academic responsibility, they cannot be granted authorship status in any work.
1.5. Generative AI tools may be used only for limited purposes, such as drafting summaries or extended abstracts and checking grammar. However, translating an entire work with generative AI tools is not acceptable.
2. Rules for Reviewers and Editors
2.1. During the peer-review process, reviewers and editors are strictly prohibited from uploading manuscripts submitted by authors to any AI platform. Such an action constitutes:
A violation of the blind peer-review principle.
A breach of scientific confidentiality.
A potential source of legal liability regarding intellectual property rights.
Any such action will be treated as an ethical violation.
2.2. Editors and reviewers must exercise due academic diligence in identifying content generated by AI and shall initiate ethical violation procedures upon detection of such cases.