Eksen MFD adopts the following principles, drawing on the guidelines of the Committee on Publication Ethics (COPE), to ensure that artificial intelligence technologies are used in the scientific publishing process in an ethical, transparent, responsible, and auditable manner, and to safeguard scientific reliability and academic integrity.
Rules for the Use of Artificial Intelligence by Authors
- Eksen MFD recognizes the potential of generative artificial intelligence (AI) and AI-assisted technologies to enhance researchers’ productivity when used responsibly. However, these tools must always be applied under the supervision and control of the authors, and their use must be properly disclosed.
- The validity of outputs generated by artificial intelligence and automated tools must be verified by the author(s). The author(s) are responsible for the accuracy, integrity, and validity of the work and its content, as well as for any ethical violations arising from it.
- Authors must explicitly disclose any use of generative artificial intelligence beyond basic language correction, editing, and formatting in preparing their manuscripts. Authors who use AI tools at any stage of the research or manuscript, including the production of visuals or graphical elements or in data collection and analysis, must transparently explain which AI tools were used and how in the Materials and Methods section (or a similar section). In addition, authors must state this information clearly and comprehensively in the “Declaration of AI use” under the “DECLARATIONS” section at the end of the manuscript, as well as on the “Title Page” submitted during the manuscript submission process.
- Automated tools and artificial intelligence cannot be listed as authors.
- Editing a manuscript using artificial intelligence or large language models (LLMs) to improve language and readability is permitted, as it reflects the use of standard tools already employed for spelling and grammar enhancement and relies on existing material created by the authors rather than generating entirely new content. In such cases, the author(s) remain responsible for the original work.
- The use and publication of visuals generated by artificial intelligence tools or large language models (LLMs) are permitted, provided that their use is disclosed in the declaration of AI use and that the outputs are verified by the authors.
- The use of any AI-generated material—including audio, text, images, or video—that may lead to violations of copyright or personal rights in the manuscript or publication process is not permitted.
Rules for the Use of Artificial Intelligence by Editors and Reviewers
- Reviewers and editors must not use generative artificial intelligence to produce their evaluations, owing to risks such as breaches of confidentiality, superficial or non-specific feedback, bias, hidden manipulation, fabricated references, and misinformation; nor may they upload the manuscript or any part of it to AI-based tools. However, using such tools to edit or rewrite their own evaluation text is acceptable, provided that this use is properly disclosed.
- Generative artificial intelligence and AI-assisted tools should not be used in peer review and editorial decision-making processes that require critical and original evaluation, as they may produce inaccurate, incomplete, or biased outcomes.
- The routine use of automated tools by the journal or publisher must be disclosed. Such tools must be appropriately tested and operated under human supervision. Journals must ensure that the automated detection of integrity issues—such as text similarity, figure manipulation or duplication, undisclosed use of generative AI, or automated reviewer suggestions—is verified by an editor or other responsible personnel.