Generative Artificial Intelligence Policy

1. Fundamental Principles and Responsibilities for the Use of Generative AI by Authors
1.1. Authorship and Responsibility

  • Generative artificial intelligence (AI) tools may not be listed as authors or co-authors under any circumstances.
  • The originality, accuracy, and scientific validity of all content in articles are entirely the responsibility of the authors.
  • The use of generative AI does not eliminate authors' scientific and ethical responsibilities.
  • Listing AI tools as authors is considered an ethical violation and may result in article rejection or retraction.
1.2. Transparency and Disclosure

Full disclosure is mandatory whenever generative AI tools are used. The disclosure must be included in an appropriate section of the article, such as "Methods", "Acknowledgements", "Ethical Statements", or "Generative Artificial Intelligence Usage Statement".

The disclosure must include the following information:

  • The name of the tool used (e.g. ChatGPT, GrammarlyGO, Copilot)
  • Version number (if applicable)
  • Purpose and scope of use (e.g. language editing, code assistance, literature summarisation)

A separate declaration is not required for tools used solely for language, spelling, punctuation, or formatting purposes.

2. Permitted Areas of Use
2.1. Language and Readability Improvements

  • It is acceptable for authors to use AI tools for grammar, narrative fluency, and formal editing purposes in their own texts.
  • These edits must not alter the scientific content of the text; they should only improve its readability.
2.2. Idea Development and Research Planning

  • AI tools may be used as an aid in generating research ideas or preparing a work plan.
  • However, the scientific framework, methods, and interpretations of the work must be shaped entirely by the authors' original contributions.
2.3. Code Assistance and Data Analysis

  • AI-supported tools may be used in writing data processing, analysis, or visualisation code.
  • Such uses must be declared; the accuracy, validity, and appropriateness of the results remain the authors' responsibility.
2.4. Literature Review and Organisation

  • AI tools may be used as auxiliary tools in classifying, summarising, or thematically organising the literature.
  • However, the accuracy of literature interpretations and citations is entirely the responsibility of the authors.
3. Restricted or Prohibited Areas of Use
3.1. Content Creation

  • The abstract, introduction, literature review, findings, discussion, or conclusion sections of an article should not be generated entirely by generative AI.
  • AI may only assist in idea generation or initial draft preparation; the final text must be edited, verified, and developed by the human author.
3.2. Generating and Interpreting Results

  • Generative AI tools may not be used to create, report, or interpret experimental or analytical results.
  • The full scientific validity of data analyses is the responsibility of the authors.
3.3. Source Creation and Citation

  • It is strictly prohibited to create fabricated, unverified, or non-existent sources using Generative AI.
  • All citations must be manually verified by the authors against reliable sources.
3.4. Academic Writing and Argument Development

  • The development of the theoretical framework, argument, and discussion sections must be entirely the academic contribution of the authors.
  • AI can only provide limited support at these stages; it cannot produce the main content.
4. Policy on the Use of AI in Creating Visuals, Graphics, and Tables
4.1. Transparency and Disclosure

  • If generative AI has been used in the creation of visuals, graphics, or tables, this must be stated below the visual or in its description.
  • The name, version, and purpose of use of the tool used must be clearly stated.
4.2. Scientific Accuracy and Responsibility

  • The scientific accuracy and appropriateness of all generated visuals are the responsibility of the authors.
  • Visuals representing research data or containing experimental results that are entirely generated by AI are accepted only with the editor's approval.
4.3. Permitted Types of Use

  1. Conceptual Diagrams/Schemes: May be used to explain theoretical models, but must accurately reflect the author's original interpretation.
  2. Data Visualisation: AI may be used to convert existing data into graphs or tables; the data must be verified by the author.
  3. Representational/Explanatory Images: Images supporting conceptual explanations may be created, but they must not be misleading to the reader.
5. Policy on the Use of Generative AI for Editors
5.1. Confidentiality and Intellectual Property

  • Editors must not upload unpublished manuscripts or data to generative AI tools in any way.
  • Article confidentiality and the protection of authors' intellectual property rights are fundamental responsibilities.
5.2. Use During the Review Process

  • Editorial decisions should be based solely on human judgement.
  • AI tools may only be used in authorised processes such as eligibility checks, language checks, or referee assignment support.
  • Authors should be informed of any AI usage.
5.3. Review of Author Declarations

  • Editors should carefully review authors' declarations regarding AI use and request additional clarification when necessary.
  • It is the editor's responsibility to assess whether the use complies with this policy.
5.4. Management of Suspicious Cases

  • In unclear cases, authors may be asked to provide additional evidence or clarification.
  • Serious ethical concerns are evaluated by the Journal Editorial Board.
5.5. Policy Updates

  • Editors are responsible for regularly monitoring developments in generative AI technologies and updates to journal policies.
6. Generative AI Usage Policy for Reviewers
6.1. Confidentiality and Ethical Responsibility

  • Reviewers must not upload manuscripts submitted for review to generative AI tools under any circumstances.
  • Such actions are considered breaches of confidentiality and intellectual property.
6.2. Use During the Review Process

  • Reviewers should rely on their own scientific expertise during the review process; they should not have AI perform the review.
6.3. Detection of AI Use

  • If undeclared AI use is detected, the situation should be reported to the editor with supporting evidence.
6.4. Review Ethics

  • Reviewers should be fair, objective, and impartial in their assessments regarding AI use.
  • Personal opinions or biases should not override the journal's ethical policies.
7. Procedures to be Applied in Case of Policy Violation

  • Articles may be rejected if the use of generative AI is not disclosed or if it is used in a manner contrary to this policy.
  • If a violation is detected in published articles, the journal may initiate a retraction or correction process.
  • Repeated violations may result in the rejection of future submissions from the authors concerned and, if deemed necessary, notification of their institutions.
8. Updating the Policy

  • This policy is reviewed at least once a year by the Editorial Board in line with rapid developments in generative AI technologies. New versions come into effect on the date they are published on the journal's website.

Last Updated: 11/1/25