AI Policy

KAREFAD (Journal of the Karatekin Faculty of Letters) Artificial Intelligence (AI) Policy (2025)


Purpose and Scope

KAREFAD acknowledges the increasing use of artificial intelligence (AI) and generative AI technologies in scholarly work and promotes the ethical, transparent, and responsible use of these technologies. This policy serves as a guideline for authors, reviewers, and editors and has been formulated in accordance with the principles of ethical publishing. Furthermore, it has been prepared in alignment with the Ethical Guide for the Use of Generative Artificial Intelligence in Scientific Research and Publication Activities of Higher Education Institutions (2024), published by the Turkish Council of Higher Education (YÖK).

1. For Authors


1.1. AI-Assisted Writing
Authors may use AI-assisted tools to improve grammar, spelling, or structure during the writing process (e.g., language checking and structural suggestions). Declaration of such assistive AI use is not mandatory. However, responsibility for the content's accuracy and compliance with academic standards rests entirely with the authors. The final version of the manuscript must be carefully reviewed by the author(s) to correct any potential biases or errors.

1.2. Use of Generative AI
The use of generative AI tools (e.g., ChatGPT, DALL·E) to produce text, images, bibliographies, or other content must be explicitly declared. The declaration should be included in the "Methodology" or "Acknowledgments" section and should specify the name of the tool, its version, and the purpose of its use. Content generated by AI must be verified by the author(s), checked for impartiality and factual accuracy, and confirmed to be original. Generative AI tools must not be credited as an author or co-author.

Note: The use of AI for technical support purposes, such as literature searches, data labeling, translation, or language checking, is permissible; however, the scientific validity of the content used must be confirmed by the researcher, and the final content remains the author's responsibility.

1.3. Use of AI for Visuals
If AI is used to generate visuals, graphical abstracts, or cover designs, this must be clearly stated, and compliance with copyright regulations must be ensured. Such visuals should be purely illustrative and representative, offering conceptual support to the reader; they must not be used in a manner that distorts data. Proper citation and ethical use are essential.

1.4. Prohibited Uses
The following applications of AI are strictly prohibited:

Extensive use of AI to generate entire sections such as the abstract, introduction, or discussion.

Generation of fictitious references or invalid citations via AI.

Generation or interpretation of research data via AI.

Uploading personal data to AI tools without anonymization.

Concealing the use of AI in ethics committee applications.

Authors must ensure that their content is accurate and that the scientific contribution is their own. In studies requiring ethics committee approval, explicit information regarding AI use must be provided.

2. For Reviewers


2.1. Confidentiality and AI
Reviewers must not upload manuscripts submitted for review, or any part of their content, to any AI tool; doing so may violate confidentiality and intellectual property rights. All comments must be the reviewer's own assessment, and AI-generated feedback must not be used.

2.2. Reporting of Suspicious Cases
If reviewers suspect that an author has not made an appropriate declaration regarding AI use, they must inform the editor. All evaluations must be based on objective standards and be free from personal bias. Where necessary, reviewers may comment on the scientific quality and ethical compliance of the AI contribution in the manuscript.

3. For Editors


3.1. Responsibility in AI Use
Editors must not upload submitted manuscripts to AI systems during the review process. Even if AI assistance is used in decision letters or internal correspondence, the full responsibility for the content lies with the editor. Any AI-assisted content must be reviewed with academic rigor, and the potential risks of bias, hallucination, and error originating from AI must be considered.

3.2. Evaluation of Declarations
Editors must carefully review authors' AI declarations and request further clarification in cases of suspicion or inadequacy. Assessing the appropriateness of AI-generated content is part of the editor's ethical responsibilities.

3.3. Monitoring Current Policies
Editors must monitor developments in the field of generative AI and national regulations (e.g., YÖK guidelines) and must keep the KAREFAD policy text up-to-date.

4. Procedures in Case of Violation


In the event of undeclared or unethical use of AI, the submission process may be suspended, the review may be cancelled, or published articles may be subject to correction or retraction. In cases of severe or repeated violations, the author's future submissions may be rejected.

Additional Note: Failure to declare AI-generated content, creation of fictitious references, or violation of ethics committee procedures may lead to disciplinary liability (YÖK Guidelines, pp. 13, 19).

This policy has been specifically prepared for KAREFAD, taking into consideration current publishing standards (COPE, WAME, etc.), the practices of major publishers (Elsevier, Sage, Springer, etc.), and the Turkish Council of Higher Education's "Ethical Guide for the Use of Generative Artificial Intelligence in Scientific Research and Publication Activities of Higher Education Institutions" (2024).