The Journal of Diplomatic Research (JDR) closely monitors technological advances in academic publishing and research processes. The use of large language models such as ChatGPT, Gemini, and Claude, and of image-generation tools such as DALL-E and Midjourney, in academic studies has been increasing rapidly in recent years.
While these technologies offer researchers new opportunities in areas such as idea generation, language improvement, and research acceleration, they also pose challenges to scientific integrity, originality, accuracy, and responsibility. The primary aim of our journal is to promote the ethical and responsible use of generative AI tools while maintaining the originality and scientific standards of academic studies.
This document has been prepared to define the rules regarding how generative AI technologies can be used in studies submitted to our journal, to clarify the responsibilities that authors must undertake, to determine accepted and prohibited forms of use, and to establish transparency and declaration obligations. The goal of our policy is to ensure high ethical standards and academic integrity in the production and dissemination of scientific knowledge in the field of diplomacy and international relations.
This policy will be regularly updated in parallel with technological developments and changes in academic standards. We expect our authors, reviewers, and editors to carefully review this policy and act in accordance with the stated rules.
Generative AI tools such as ChatGPT (OpenAI), Claude, and Gemini cannot be listed as an author or co-author under any circumstances. Authors bear personal responsibility for all content, accuracy, integrity, and originality of the article. The use of AI does not eliminate authors' scientific and ethical obligations.
Every instance where generative AI tools are used in the article must be clearly declared. The declaration should be made in the "Method" section or another appropriate section of the article. It must detail the full names of the tools used, their version information, the manner of use, and the purpose.
The use of AI for grammar, spelling, punctuation, and language flow corrections in texts created by authors is acceptable. Such corrections should not alter the original content and should only improve readability.
AI can be used as a support tool in developing research questions, brainstorming, and planning research design. The conceptual framework and methodology of the research must be based on the author's own scientific perspective in all cases.
AI can be used as a support tool in writing code for data analysis purposes. Responsibility for the accuracy and appropriateness of statistical analyses rests with the author.
AI can be used as an auxiliary tool in organizing and classifying existing literature. Responsibility for the scope and accuracy of the literature review rests with the author.
Having AI write fundamental sections of the article in their entirety, such as the abstract, introduction, literature review, or discussion, is not acceptable. AI outputs should be treated only as a starting point and must be comprehensively revised, developed, and verified by the authors.
The use of AI in generating, reporting, or interpreting research findings is not acceptable. Authors are directly responsible for the accuracy, integrity, and validity of data analysis findings.
The creation of fictitious or unverifiable sources through AI tools or citation of non-existent works is strictly prohibited. All sources must be verified by authors and appropriately cited.
The argument structure, theoretical contribution, and main theses of the article must be developed by the author. AI can only be positioned as a supporting tool in these processes.
Generative AI tools can reproduce existing content without attribution or produce fabricated information known as "hallucinations." Authors must carefully examine and verify AI outputs and screen them for plagiarism.
Uploading confidential, personal, or sensitive research data to generative AI platforms can pose data security risks. Authors should review the privacy policies of the tools they use and take necessary security measures.
All authors submitting articles must complete the Generative Artificial Intelligence Use Declaration Form and upload it to the system. The form must detail the tools used, their version information, the manner of use, and the purpose of use.
The corresponding author must upload the Generative Artificial Intelligence Use Declaration Form with a wet signature during article submission.
Click to download the Generative Artificial Intelligence Use Declaration Form.
The Journal of Diplomatic Research values the meticulous fulfillment of duties by its editors and reviewers in evaluation processes. The proliferation of generative AI technologies also affects evaluation processes.
Editors must not upload unpublished articles or related files, images, and information to generative AI tools. Protecting the confidentiality of article contents and the intellectual property rights of authors is the fundamental responsibility of editors.
Editors may use AI tools at the article evaluation stage (e.g., for suitability checks or reviewer selection) only in a manner approved by journal management. Authors must be notified of any such AI use.
Reviewers must not upload unpublished articles or related materials sent to them for evaluation to generative AI tools under any circumstances.
Reviewers should not use generative AI tools for article evaluation or analysis purposes. Evaluations must be based on the reviewer's own expertise and knowledge. Reviewers may use AI tools only to improve the language and readability of their own evaluation reports. Even in this case, the reviewer is fully responsible for the accuracy and integrity of the report's content.
Generative AI can be used to visualize theoretical frameworks, conceptual models, or processes. Such visuals must accurately represent the author's own understanding and explanations.
Authors are encouraged to use AI tools to visualize their own research data, for example to enhance visual quality in the design of graphs, charts, and tables.
AI can be used to create representative visuals or illustrations to explain concepts in the field of diplomacy and international relations. Such visuals should help readers understand concepts and should not be misleading.
This policy will be regularly reviewed and updated in line with technological advances and changes in academic standards. We encourage our authors to use AI tools while keeping this policy in mind and preserving the principles of scientific integrity.
The Journal of Diplomatic Research will continue to support innovative approaches in academic research while protecting scientific standards and ethical principles. We believe that maintaining this balance will strengthen both the quality of our journal and our contribution to the field of science.
The Journal of Diplomatic Research (JDR) is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) License.