Generative Artificial Intelligence Policy

The Journal of Diplomatic Research (JDR) closely monitors technological advances in academic publishing and research processes. The use of large language models such as ChatGPT, Gemini, and Claude, as well as image-generation tools such as DALL-E and Midjourney, in academic studies has been increasing rapidly in recent years.

While these technologies offer new opportunities for researchers in areas such as idea generation, language improvement, and research acceleration, they also present various challenges in terms of scientific integrity, originality, accuracy, and responsibility. The primary aim of our journal is to promote the ethical and responsible use of generative AI tools while maintaining the originality and scientific standards of academic studies.

This document has been prepared to define the rules regarding how generative AI technologies can be used in studies submitted to our journal, to clarify the responsibilities that authors must undertake, to determine accepted and prohibited forms of use, and to establish transparency and declaration obligations. The goal of our policy is to ensure high ethical standards and academic integrity in the production and dissemination of scientific knowledge in the field of diplomacy and international relations.

This policy will be regularly updated in parallel with technological developments and changes in academic standards. We expect our authors, reviewers, and editors to carefully review this policy and act in accordance with the stated rules.


Principles and Responsibilities for Authors
Authorship and Responsibility

Generative AI tools such as ChatGPT (OpenAI), Claude, and Gemini cannot be listed as an author or co-author under any circumstances. Authors bear personal responsibility for all content and for the accuracy, integrity, and originality of the article. The use of AI does not eliminate authors' scientific and ethical obligations.

Transparency and Declaration

Every use of a generative AI tool in the article must be clearly declared. The declaration should be made in the "Method" section or another appropriate section of the article and must specify the full name of each tool used, its version, how it was used, and for what purpose.


Permitted Use Areas
1. Language and Readability Improvements

The use of AI for grammar, spelling, punctuation, and language flow corrections in texts created by authors is acceptable. Such corrections should not alter the original content and should only improve readability.

2. Idea Development and Research Planning

AI can be used as a support tool in developing research questions, brainstorming, and planning research design. The conceptual framework and methodology of the research must be based on the author's own scientific perspective in all cases.

3. Coding Assistance and Data Analysis

AI can be used as a support tool in writing code for data analysis purposes. Responsibility for the accuracy and appropriateness of statistical analyses rests with the author.

4. Organization in Literature Reviews

AI can be used as an auxiliary tool in organizing and classifying existing literature. Responsibility for the scope and accuracy of the literature review rests with the author.


Restricted or Prohibited Use Areas
1. Content Creation

Having AI write fundamental sections of the article in full, such as the abstract, introduction, literature review, or discussion, is not acceptable. AI outputs should be treated only as a starting point and must be comprehensively revised, developed, and verified by the authors.

2. Generation and Interpretation of Results

The use of AI in generating, reporting, or interpreting research findings is not acceptable. Authors are directly responsible for the accuracy, integrity, and validity of data analysis findings.

3. Source Creation and Citation

The creation of fictitious or unverifiable sources through AI tools or citation of non-existent works is strictly prohibited. All sources must be verified by authors and appropriately cited.

4. Academic Writing and Argument Development

The argument structure, theoretical contribution, and main theses of the article must be developed by the author. AI can only be positioned as a supporting tool in these processes.


Warnings and Plagiarism Controls
1. Plagiarism and Erroneous Information

Generative AI tools can reproduce existing content without attribution or produce false information known as "hallucinations." Authors must carefully examine and verify AI outputs and screen them for plagiarism.

2. Data Privacy and Security

Uploading confidential, personal, or sensitive research data to generative AI platforms can pose data security risks. Authors should review the privacy policies of the tools they use and take necessary security measures.

Procedures to be Applied in Case of Policy Violation
  • The article may be rejected if AI use is not declared or if AI is used contrary to this policy.
  • Our journal uses AI-detection tools to identify content produced by generative AI.
  • If policy violation is detected in published articles, actions such as article retraction or publication of corrections may be applied.
  • Repeated violations may lead to the rejection of the author's future submissions by our journal.

Generative Artificial Intelligence Use Declaration Form

All authors submitting articles must complete and upload the Generative Artificial Intelligence Use Declaration Form to the system. This form must include the following information:

  • Whether a generative AI tool was used
  • Full name and version information of the tools used
  • Detailed description of the purpose and scope of use
  • The sections in which it was used, and how
  • Complete and accurate information regarding verification and control processes

The corresponding author must upload the Generative Artificial Intelligence Use Declaration Form, signed with a wet (handwritten) signature, during article submission.

Click to download the Generative Artificial Intelligence Use Declaration Form.


Principles and Responsibilities for Editors and Reviewers

The Journal of Diplomatic Research expects its editors and reviewers to carry out their evaluation duties meticulously. The proliferation of generative AI technologies also affects these evaluation processes.

Policies for Editors
Confidentiality and Intellectual Property Responsibility

Editors must not upload unpublished articles or related files, images, and information to generative AI tools. Protecting the confidentiality of article contents and the intellectual property rights of authors is the fundamental responsibility of editors.

AI Use in the Evaluation Process

Editors may use AI tools at the article evaluation stage (e.g., suitability checks, reviewer selection) only in ways approved by journal management. Authors must be notified of any such AI use.


Policies for Reviewers
Confidentiality and Ethical Responsibility

Reviewers must not upload unpublished articles or related materials sent to them for evaluation to generative AI tools under any circumstances.

AI Use in the Evaluation Process

Reviewers should not use generative AI tools for article evaluation or analysis purposes. Evaluations must be based on the reviewer's own expertise and knowledge. Reviewers may use AI tools only to improve the language and readability of their own evaluation reports. Even in this case, the reviewer is fully responsible for the accuracy and integrity of the report's content.


Permitted Use Areas for AI-Generated Visuals
1. Conceptual Diagrams and Explanatory Visuals

Generative AI can be used to visualize theoretical frameworks, conceptual models, or processes. Such visuals must accurately represent the author's own understanding and explanations.

2. Data Visualization

Authors are encouraged to use AI tools to visualize their own research data. These tools can be used to enhance visual quality in the design of graphs, charts, and tables.

3. Illustrations and Representative Visuals

AI can be used to create representative visuals or illustrations to explain concepts in the field of diplomacy and international relations. Such visuals should help readers understand concepts and should not be misleading.


Matters Requiring Attention
  • The underlying data of graphs and tables containing actual research data must not be manipulated.
  • AI tools should be used as auxiliary tools in data visualization design only, not for data generation purposes.
  • If photographs representing real situations or events are created with AI, it must be clearly stated that they are representative in nature.
  • This is particularly important for visuals showing diplomatic settings, human interactions, or real individuals.
  • Generated visuals must comply with ethical principles and must not contain misleading, discriminatory, or stereotyping content.
  • Cultural sensitivities and diversity must be taken into consideration.


Future Updates

This policy will be regularly reviewed and updated in line with technological advances and changes in academic standards. We encourage our authors to use AI tools while keeping this policy in mind and preserving the principles of scientific integrity.

The Journal of Diplomatic Research will continue to support innovative approaches in academic research while protecting scientific standards and ethical principles. We believe that maintaining this balance will strengthen both the quality of our journal and our contribution to the field of science.

Last Updated: March 27, 2026

The Journal of Diplomatic Research (JDR) is licensed under a Creative Commons Attribution 4.0 International License.