History Studies recognises the increasing use of artificial intelligence (AI) tools in academic research and scholarly publishing. Assistive AI tools, such as spelling and grammar checkers, may be used to improve the technical quality of a text written by the author. In contrast, when generative AI systems (such as large language models including ChatGPT, Gemini, or Claude) are used for text generation or data analysis, the nature and scope of this use must be clearly declared by the author(s). The use of AI tools does not in any way diminish the authors’ responsibility for the originality, accuracy, and scholarly integrity of their work.
Authors are fully responsible for the entire content of manuscripts submitted to History Studies. AI tools cannot be listed as authors or co-authors of a scholarly work. Content generated by AI may not be used without appropriate human review and verification. The generation of misleading content, synthetic data, fabricated references, or manipulated images constitutes a breach of academic ethics. Any outputs suggested or generated by AI tools must be independently verified for accuracy by the author(s).
During the peer-review process, reviewers must not upload submitted manuscripts, data, or images to any generative AI system. Such practices are incompatible with the principles of confidentiality and data security. The only exception concerns tools that operate locally without requiring an internet connection and that do not transmit data to external servers (for example, models such as DeepSeek run locally via tools like Ollama).
History Studies encourages the responsible, transparent, and ethical use of AI tools in research and academic writing. This policy may be revised as technological developments evolve.
Where generative AI has been used, the declaration should be included in the manuscript, for example: “This study used the AI tool [tool name] for [purpose]. All outputs were reviewed and verified by the author(s).”